NeRFs for Capturing Art [pdf] (github.com)

🤖 AI Summary
A new write-up demonstrates using Neural Radiance Fields (NeRFs) to capture and reproduce artworks with photorealistic, view-dependent detail, presenting a practical pipeline for digitizing paintings, sculptures, and other cultural heritage objects. By fitting a volumetric radiance model to densely captured multi-view images, the approach preserves fine surface texture, brushstroke relief, and specular effects that traditional photogrammetry often blurs or misses. The result is interactive, relightable 3D reconstructions that support virtual viewing, close inspection for conservation, and high-quality archival assets for museums and galleries.

Technically, the work leverages core NeRF ideas—volumetric rendering of learned radiance and density fields optimized from calibrated images—to model complex reflectance and occlusions without explicit geometry priors. The paper discusses capture best practices (dense, well-calibrated viewpoints and controlled lighting), trade-offs (training time, memory, and sensitivity to specularities or thin geometry), and potential remedies (higher-resolution models, multi-light or spectral imaging, and hybrid geometry priors).

Implications for the AI/ML community include new real-world datasets, opportunities to extend NeRFs with BRDF estimation or multispectral inputs, and practical demand for faster, memory-efficient NeRF variants to make art-grade digitization routine.
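To make the "volumetric rendering of learned radiance and density fields" concrete, here is a minimal NumPy sketch of the standard NeRF compositing quadrature along a single ray. The densities, colors, and sample spacings are hypothetical toy values; in a real pipeline they would be predicted by the trained radiance field at sampled points along each camera ray.

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Volume-rendering quadrature used by NeRF-style models.

    sigmas: (N,) per-sample densities along one ray
    colors: (N, 3) per-sample RGB radiance
    deltas: (N,) distances between adjacent samples
    """
    # Segment opacity: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance T_i: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas   # per-sample contribution to the pixel
    return weights @ colors    # final composited pixel color

# Toy ray: a nearly opaque red sample in front of a green one;
# the red sample should dominate the rendered color.
rgb = composite_ray(np.array([50.0, 50.0]),
                    np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]),
                    np.array([0.5, 0.5]))
```

The per-sample `weights` are also what makes occlusion handling emerge without explicit geometry: a high-density sample near the surface absorbs the transmittance, so samples behind it contribute almost nothing to the pixel.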