MASt3R IMC2025 Notebook


A technical migration guide — what changed, why it changed,
and what stayed exactly the same.

Conversion Pipeline — what stays the same
The 6-stage pipeline converting MASt3R outputs into COLMAP poses is architecturally identical in both versions. Only the implementation of Stage 5 and a minor API call in Stage 6 differ.
01 · extract_correspondences_nonsym() · ✓ same
    MASt3R inference → dense descriptors → confidence-filtered pixel matches
02 · transform_keypoints_to_original() · ✓ same
    Back-project from DUSt3R crop/resize space → original image coordinates
03 · unify_keypoints_and_matches() · ✓ same
    Deduplicate keypoints across pairs → assign global integer IDs
04 · save_unified_keypoints_and_matches() · ✓ same
    Serialize to keypoints.h5 / matches.h5 / pairs.txt
05 · import_into_colmap() · ✗ rewritten
    Write HDF5 data into a COLMAP SQLite database
06 · verify_matches() + incremental_mapping() · ⚠ API fix
    RANSAC geometric verification → pycolmap SfM reconstruction
All Changes
01 · Python Wheel & pycolmap Version (compat)
The cp311 notebook uses pycolmap 3.11.1 installed from the /kaggle/input/pycolmap3-11/ dataset. cp312 upgrades to pycolmap 4.0.3, installed as a cp312-compiled wheel from a user notebook dataset. All MASt3R source paths migrate from /kaggle/input/mast3r-fix/ to /kaggle/input/notebooks/stpeteishii/mast3r-cp312/, and print(sys.version) is added early on to explicitly verify the active Python version.
cp311
cp312
pycolmap-3.11.1-cp311-cp311-manylinux_2_28_x86_64.whl
+pycolmap-4.0.3-cp312-cp312-manylinux_2_28_x86_64.whl
02 · Explicit FAISS CUDA Wheel Installation (compat)
cp311 installs faiss-gpu-cu12 in a single command from the mast3r wheels folder. cp312 installs three separate wheels explicitly — faiss_gpu_cu12, nvidia_cublas_cu12, and nvidia_cuda_runtime_cu12 — from a dedicated dataset, ensuring correct CUDA runtime linkage for cp312.
cp311
cp312
!pip install faiss-gpu-cu12 --no-index --find-links=.../mast3r-wheels
+!pip install faiss_gpu_cu12-1.14.1.post1-cp312-...whl
+!pip install nvidia_cublas_cu12-12.6.4.1-...whl
+!pip install nvidia_cuda_runtime_cu12-12.9.79-...whl
03 · ASMK .so Extension Loading (compat)
ASMK's compiled C extensions (hamming, etc.) are built per Python version. cp312 adds explicit manual loading of the cp312-compiled .so files via importlib.util.spec_from_file_location, since the cp311 binaries cannot be loaded under Python 3.12.
cp311
cp312
(no explicit .so loading — relied on cp311 binary)
+_load_so("asmk.hamming", ".../hamming.cpython-312-x86_64-linux-gnu.so")
+_load_so("asmk.functional", ".../functional.py")
04 · ASMK / FAISS Monkey-Patches (compat)
Two monkey-patches are added in cp312 to fix FAISS compatibility issues.

① FaissGpuL2Index patch: create_index is overridden to return a CPU IndexFlatL2, bypassing GPU initialization that fails in the cp312 environment.

② Codebook.quantize patch: rebuilds the FAISS index on demand and unifies the return signature, returning a 3-tuple (des, word_ids, image_ids) on the build_ivf path and a 2-tuple (des, word_ids) on the query_ivf path, fixing an argument mismatch that would otherwise crash ASMK retrieval.
cp312 only
+asmk_index.FaissGpuL2Index.create_index = lambda self, pts: faiss.IndexFlatL2(pts.shape[1])
+asmk_codebook.Codebook.quantize = _fixed_quantize
05 · torch.serialization Safe Globals (compat)
Newer PyTorch versions default torch.load() to weights-only mode and require explicit whitelisting of non-tensor classes. cp312 adds torch.serialization.add_safe_globals([argparse.Namespace]) before loading the MASt3R checkpoint, preventing a WeightsOnlyError that cp311's older torch version never raised.
cp311
cp312
mast3r_model = load_model(local_model_directory, device=device)
+import argparse
+torch.serialization.add_safe_globals([argparse.Namespace])
+mast3r_model = load_model(local_model_directory, device=device)
06 · import_into_colmap: Full Rewrite (rewrite, breaking)
This is the most significant change. The pycolmap 4.0.3 Database high-level API changed incompatibly, so cp311's approach of importing and calling the function from the external h5_to_db utility no longer works.

cp312 reimplements import_into_colmap from scratch using raw sqlite3:
• CREATE TABLE IF NOT EXISTS for cameras, images, keypoints, matches, two_view_geometries, and descriptors
• Read keypoints.h5 → INSERT INTO cameras (SIMPLE_RADIAL, f=max(w,h), cx=w/2, cy=h/2) + INSERT INTO images + keypoints blob
• Read matches.h5 → compute pair_id = id1 × 2147483647 + id2 → INSERT INTO matches blob (uint32)
The resulting database schema is identical to what pycolmap's own writer would produce — the difference is only in how it is created.
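A trimmed sketch of that raw-sqlite3 approach, covering only a subset of COLMAP's columns (the real tables carry extra fields such as pose priors); table names and the pair_id formula come from the steps above, helper names are illustrative:

```python
import sqlite3
import numpy as np

MAX_IMAGE_ID = 2147483647  # 2**31 - 1, COLMAP's pair-id base

def pair_id_from_image_ids(id1, id2):
    # COLMAP stores each match pair under one integer key,
    # smaller image id first.
    if id1 > id2:
        id1, id2 = id2, id1
    return id1 * MAX_IMAGE_ID + id2

def create_tables(conn):
    conn.executescript("""
        CREATE TABLE IF NOT EXISTS cameras (
            camera_id INTEGER PRIMARY KEY, model INTEGER,
            width INTEGER, height INTEGER, params BLOB,
            prior_focal_length INTEGER);
        CREATE TABLE IF NOT EXISTS images (
            image_id INTEGER PRIMARY KEY, name TEXT UNIQUE,
            camera_id INTEGER);
        CREATE TABLE IF NOT EXISTS keypoints (
            image_id INTEGER PRIMARY KEY, rows INTEGER,
            cols INTEGER, data BLOB);
        CREATE TABLE IF NOT EXISTS matches (
            pair_id INTEGER PRIMARY KEY, rows INTEGER,
            cols INTEGER, data BLOB);
    """)

def add_image(conn, image_id, name, width, height):
    # SIMPLE_RADIAL (model 2): params = [f, cx, cy, k], initialized
    # with f = max(w, h) and the principal point at the image center.
    params = np.array([max(width, height), width / 2, height / 2, 0.0],
                      dtype=np.float64)
    conn.execute("INSERT INTO cameras VALUES (?, 2, ?, ?, ?, 0)",
                 (image_id, width, height, params.tobytes()))
    conn.execute("INSERT INTO images VALUES (?, ?, ?)",
                 (image_id, name, image_id))

def add_matches(conn, id1, id2, matches):
    m = np.asarray(matches, dtype=np.uint32)  # (N, 2) keypoint index pairs
    conn.execute("INSERT INTO matches VALUES (?, ?, ?, ?)",
                 (pair_id_from_image_ids(id1, id2),
                  m.shape[0], m.shape[1], m.tobytes()))
```

Arrays are serialized with tobytes() exactly as COLMAP expects to read them back: float64 camera params, float32 keypoints, uint32 match indices.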
07 · pycolmap API: cam_from_world() (breaking)
In pycolmap 3.x, cam_from_world was a plain attribute holding the pose object. In pycolmap 4.x it became a method that returns the pose. The call site in reconstruct_from_db must be updated accordingly.
cp311 — attribute
cp312 — method call
image.cam_from_world.rotation.matrix().tolist()
image.cam_from_world.translation.tolist()
+image.cam_from_world().rotation.matrix().tolist()
+image.cam_from_world().translation.tolist()
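If one codebase has to run against both pycolmap versions, a small shim (not in the notebook; illustrative only) can paper over the attribute-vs-method difference:

```python
def get_cam_from_world(image):
    """Return the cam_from_world pose for pycolmap 3.x and 4.x alike.

    In 3.x the name is a plain attribute holding the pose object;
    in 4.x it resolves to a bound method that must be called."""
    cfw = image.cam_from_world
    return cfw() if callable(cfw) else cfw
```

The callable() check is safe here because the pose object itself is not callable in either version.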
08 · Empty Pair Guard in run_one_dataset (robustness)
cp312 adds an early-exit check after shortlisting: if no valid image pairs were found, the dataset is skipped cleanly with an informative message rather than passing an empty list into match_with_mast3r_and_save and potentially crashing downstream.
cp312 only
+if len(indexed_pairs) == 0:
+    print(f'⏭️ No pairs for "{dataset}", skipping.')
+    return f'Dataset "{dataset}" → skipped.', timings
09 · traceback.print_exc() in Error Handler (robustness)
The except block in run_one_dataset now calls traceback.print_exc() in addition to printing the error message string, making failures significantly easier to diagnose since the full stack trace is shown.
cp311
cp312
except Exception as e:
    print(f"Error in dataset {dataset}: {e}")
+    traceback.print_exc()
10 · Verbose Logging in reconstruct_from_db (robustness)
cp312 introduces a log() helper with sys.stdout.flush() and wraps every major step — import_into_colmap, run_verify_matches_safe, incremental_mapping — with entry/exit log lines. This makes it easy to pinpoint where a hang or crash occurs during reconstruction.
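The exact helper is not reproduced above; a minimal sketch of that pattern, assuming only the standard library (the timed_step wrapper is illustrative, not from the notebook):

```python
import sys
import time

def log(msg):
    # Timestamped and flushed immediately, so the line is visible
    # even if the next step hangs or the process dies.
    print(f"[{time.strftime('%H:%M:%S')}] {msg}")
    sys.stdout.flush()

def timed_step(name, fn, *args, **kwargs):
    # Wrap a pipeline step with entry/exit log lines; the last
    # entry line printed pinpoints where a hang or crash occurred.
    log(f"-> {name} ...")
    t0 = time.time()
    result = fn(*args, **kwargs)
    log(f"<- {name} done in {time.time() - t0:.1f}s")
    return result
```

Without the flush, buffered stdout on Kaggle can swallow the very lines that would identify the failing stage.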
11 · ThreadPoolExecutor max_workers: 2 → 1 (robustness)
run_mast3r_pipeline reduces the thread pool from 2 workers to 1. With the heavier memory footprint of the cp312 stack (larger pycolmap, explicit CUDA wheels), running two datasets concurrently risks OOM errors on Kaggle's GPU instances.
cp311
cp312
ThreadPoolExecutor(max_workers=2)
+ThreadPoolExecutor(max_workers=1)
Summary
| Area | cp311 | cp312 | Status |
|---|---|---|---|
| pycolmap | 3.11.1 (cp311 wheel) | 4.0.3 (cp312 wheel) | upgraded |
| FAISS install | single find-links install | 3 explicit wheel installs | upgraded |
| ASMK .so loading | implicit (cp311 binary) | explicit importlib loading | new |
| ASMK monkey-patches | none | FaissGpuL2Index + Codebook.quantize | new |
| torch.serialization | not needed | add_safe_globals([argparse.Namespace]) | new |
| import_into_colmap | h5_to_db helper | raw sqlite3 rewrite | rewritten |
| cam_from_world | .rotation (attribute) | ().rotation (method) | API break |
| Empty pair guard | absent | early-exit + skip message | improved |
| Error reporting | message only | message + traceback.print_exc() | improved |
| Reconstruct logging | minimal | step-by-step log() with flush | improved |
| ThreadPoolExecutor | max_workers=2 | max_workers=1 | reduced |
| MASt3R pipeline (stages 1–4) | identical | identical | unchanged |