NExF: Learning Neural Exposure Fields for View Synthesis (m-niemeyer.github.io)

🤖 AI Summary
Researchers introduced Neural Exposure Fields (NExF), a method that jointly learns a 3D scene representation and a per‑point exposure field to produce consistent, well‑exposed novel views from captures with strong per‑image exposure variation. Instead of treating exposure as a camera- or pixel-level property, NExF predicts an optimal exposure value for each 3D point and optimizes that field together with the neural scene model. The result is high-quality, 3D‑consistent view synthesis in high‑dynamic‑range and mixed indoor/outdoor scenes without multi‑exposure captures or post‑processing; the paper reports state‑of‑the‑art results, a >55% improvement over the best baselines on challenging benchmarks, and faster training than prior approaches.

Technically, the core contributions are (1) a neural representation that encodes an exposure value per 3D location, (2) a joint optimization pipeline that conditions the scene radiance/appearance on this exposure field, and (3) a conditioning mechanism that lets the model both predict ideal exposure and accept user-specified exposure values at test time, generalizing to out‑of‑distribution exposures.

Practically, NExF produces well-exposed colors by applying the learned exposure field during rendering, enabling accurate synthesis across regions with drastically different lighting (e.g., windows vs. interiors) and simplifying real‑world capture workflows. Results are demonstrated on benchmarks such as the ZipNeRF scenes, and the work is presented at NeurIPS 2025.
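The core idea of conditioning rendered color on a per‑point exposure field can be illustrated with a minimal sketch. This is not the paper's implementation: the tiny random networks, the `2^e(x)` exposure scaling, and the clipped tone mapping below are all assumptions chosen to make the conditioning mechanism concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for learned networks: a tiny random MLP layer
# shared here only to keep the sketch self-contained.
W1 = rng.normal(size=(3, 16))
W2 = rng.normal(size=(16, 1))

def exposure_field(x):
    """Hypothetical per-point exposure field e(x): 3D points -> scalar EVs."""
    return np.tanh(np.maximum(x @ W1, 0.0) @ W2)  # shape (N, 1)

def radiance_field(x):
    """Placeholder for the scene's linear (HDR) radiance at each 3D point."""
    return np.abs(np.sin(x @ W1[:, :3]))  # shape (N, 3)

def render_color(x):
    """Condition output color on the predicted exposure: scale linear
    radiance by 2^e(x), then tone-map (here: clip) into display range."""
    e = exposure_field(x)
    hdr = radiance_field(x) * (2.0 ** e)
    return np.clip(hdr, 0.0, 1.0)

pts = rng.normal(size=(4, 3))  # some sample 3D points
colors = render_color(pts)     # shape (4, 3), values in [0, 1]
```

At test time, replacing `exposure_field(x)` with a user-specified constant would correspond to the paper's user-controlled exposure conditioning; during training, NExF optimizes the exposure field jointly with the scene representation rather than fixing it as done here.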