Learning Lens Blur Fields (blur-fields.github.io)

🤖 AI Summary
Researchers introduce "lens blur fields," a compact neural representation (tiny MLPs) that models a camera’s point spread function (PSF) as a continuous, high‑dimensional function of image‑plane location, focus setting, and optionally scene depth. The MLP captures combined optical effects—defocus, diffraction, aberrations—and sensor specifics like color filters and micro‑lenses, producing a device‑specific PSF that varies across the frame.

The capture pipeline is practical: record short focal stacks of monitor patterns with a phone or camera, run a non‑blind deconvolution to recover impulse responses, and train the MLP to yield a continuous 5D (and in some cases 6D) blur field. The authors also release the first dataset of such blur fields for smartphones and SLR lenses.

Significance: lens blur fields enable accurate, compact models of real optics that reveal subtle device‑level differences (even between identical phone models), unlock more realistic depth‑of‑field rendering, and improve device‑specific deblurring and image restoration. Because the representation is continuous and sensor‑aware, it can be used for forensic device identification, improved synthetic rendering in graphics, and tighter camera ISP corrections or calibration. The dataset and method lower the barrier to measuring per‑device optics and integrating physical lens behavior into ML systems for photography and vision.
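To make the representation concrete, here is a minimal NumPy sketch of the core idea: a tiny MLP queried with image‑plane position, focus setting, and an offset within the PSF patch, whose normalized output is the blur kernel at that location. Everything here—the network sizes, the softplus‑and‑normalize step, and the function names—is an illustrative assumption, not the authors' actual architecture, and the weights are random stand‑ins rather than trained values.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    # sizes, e.g. [5, 64, 64, 1]: input dim, hidden widths, output dim.
    # He-style random init; a real blur field would use trained weights.
    return [(rng.normal(0.0, np.sqrt(2.0 / m), (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    for W, b in params[:-1]:
        x = np.maximum(x @ W + b, 0.0)   # ReLU hidden layers
    W, b = params[-1]
    return x @ W + b                     # linear output (raw intensity)

def eval_psf(params, x, y, focus, patch=21):
    # Sample the field over a (u, v) grid of offsets around the query
    # pixel; softplus + normalization yields a nonnegative PSF summing to 1.
    t = np.linspace(-1.0, 1.0, patch)
    uu, vv = np.meshgrid(t, t, indexing="ij")
    q = np.stack([np.full(uu.size, x), np.full(uu.size, y),
                  np.full(uu.size, focus), uu.ravel(), vv.ravel()], axis=1)
    raw = mlp(params, q).reshape(patch, patch)
    psf = np.logaddexp(0.0, raw)         # numerically stable softplus
    return psf / psf.sum()

# Query the (untrained) 5D blur field at one image location and focus:
params = init_mlp([5, 64, 64, 1])
psf = eval_psf(params, x=0.3, y=-0.2, focus=0.5)
```

Depth‑dependent (6D) variants would simply append scene depth as a sixth input coordinate; the appeal of the MLP form is that it stays continuous in all of these variables while remaining only a few kilobytes of weights.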