Spatially-Varying Autofocus (imaging.cs.cmu.edu)

🤖 AI Summary
At ICCV 2025, Qin, Sankaranarayanan, and O'Toole introduce "spatially-varying autofocus": a prototype camera that optically produces all-in-focus images by steering the focus of each pixel (or superpixel) to the local scene depth, rather than relying on small apertures, computational deblurring, or focus stacking. Their system inverts a Split-Lohmann display: a Lohmann lens paired with a phase-only spatial light modulator (HOLOEYE GAEA2) lets different image regions focus at different depths, while a spatially-varying autofocus algorithm iteratively estimates a per-location focus map from contrast and dual-pixel disparity cues. The result is a single, large-aperture capture that brings every scene point into optical focus, preserves full spatial resolution, and even yields a depth map — all without post-capture image synthesis.

Technically, they extend contrast-detection (CDAF) and phase-detection (PDAF) autofocus to operate spatially: CDAF finds an optimal focus per superpixel via contrast maximization (which needs multiple captures to estimate scene geometry), while PDAF uses dual-pixel disparity from a single capture to produce a focus map suitable for dynamic scenes.

Their bench prototype pairs the GAEA2 SLM (3840×2160, 3.74 μm pitch) with a Canon EOS R10 dual-pixel sensor and demonstrates PDAF at 21 FPS after sensor-streaming modifications. Demonstrations include freeform depth-of-field control, removal of thin occluders by defocusing them to background depths, and robust operation on moving scenes — opening new directions for optics-first depth control in computational imaging and video.
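The spatially-varying CDAF idea — sweep focus, score local sharpness per superpixel, and keep the best focus index for each region — can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, patch size, and the variance-of-Laplacian contrast metric are illustrative assumptions; the paper's actual contrast measure and optimization loop may differ.

```python
import numpy as np

def contrast_map(image, patch=16):
    """Local contrast per patch, using variance of a 4-neighbour Laplacian.
    (Illustrative metric; the paper's contrast measure may differ.)"""
    lap = (np.roll(image, 1, 0) + np.roll(image, -1, 0)
           + np.roll(image, 1, 1) + np.roll(image, -1, 1) - 4 * image)
    h, w = image.shape
    hp, wp = h // patch, w // patch
    # Tile the Laplacian response into (hp, wp) superpixel blocks.
    blocks = lap[:hp * patch, :wp * patch].reshape(hp, patch, wp, patch)
    return blocks.var(axis=(1, 3))

def spatially_varying_cdaf(focus_stack, patch=16):
    """Per-superpixel contrast-detection autofocus.

    focus_stack: list of 2D grayscale frames, one per candidate focus setting.
    Returns an integer focus map: for each superpixel, the index of the
    focus setting that maximizes local contrast. In the actual camera this
    map would drive the SLM so each region is brought into optical focus.
    """
    scores = np.stack([contrast_map(f, patch) for f in focus_stack])
    return scores.argmax(axis=0)
```

A synthetic sanity check: build two frames where opposite halves are "sharp" (high-frequency noise) and confirm each superpixel selects the frame in which it is sharp.

```python
rng = np.random.default_rng(0)
noise = rng.standard_normal((32, 32))
near = np.zeros((32, 32)); near[:, :16] = noise[:, :16]   # left half sharp
far = np.zeros((32, 32));  far[:, 16:] = noise[:, 16:]    # right half sharp
fmap = spatially_varying_cdaf([near, far], patch=16)      # 2x2 focus map
```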