Apple uses 3D Gaussian splatting for Personas and 3D conversions of photos (www.cnet.com)

🤖 AI Summary
Apple has taken its Vision Pro "Personas" out of beta: the headset now builds realistic 3D replicas of users from a small handful of photos and captured facial expressions, rendering people as lifelike avatars for FaceTime and shared virtual spaces. The core technique is Gaussian splatting, a machine-learning method that knits multiple 2D images into a dense 3D representation, combined with a "concert" of neural networks (Apple says more than a dozen models, recently streamlined). visionOS 26 improves multi-angle fidelity (eyes, eyelashes, jewelry, and seamless face-body scans), runs the processing on-device, and supports collaborative sessions with up to five participants mixing local and remote attendees.

This matters because it advances practical telepresence and identity representation in AR/VR: Persona scans feel more like being "there" than traditional 2D video or cartoon avatars, enabling potential uses from remote collaboration to medical training. Technical implications include efficient on-device 3D reconstruction from minimal input photos, tighter body-and-face integration, and reuse of Gaussian splatting for immersive 3D conversions of photos.

Apple currently limits users to a single Persona and keeps the experience Vision Pro-centric, but the approach points to broader cross-device possibilities (iPhone, AR glasses), alongside open questions about accessibility, privacy, and how authentic virtual identity should be managed.
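To make the technique concrete: Gaussian splatting represents a scene as a cloud of 3D Gaussians, each with a position, size, color, and opacity, which are projected onto the image plane and alpha-composited front to back. The sketch below is a heavily simplified, hypothetical illustration of that rendering step (isotropic footprints, a basic pinhole camera), not Apple's implementation; production 3DGS uses anisotropic covariances, tiled rasterization, and gradient-based optimization of the Gaussians from input photos.

```python
import numpy as np

def render_splats(means, scales, colors, opacities, focal=100.0, size=64):
    """Render 3D Gaussians to an RGB image by projecting each Gaussian,
    sorting by depth, and alpha-compositing front to back."""
    img = np.zeros((size, size, 3))
    transmittance = np.ones((size, size))  # how much light still passes each pixel
    ys, xs = np.mgrid[0:size, 0:size]
    # Composite nearest splats first (front-to-back over, by camera-space z).
    for i in np.argsort(means[:, 2]):
        x, y, z = means[i]
        if z <= 0:
            continue  # behind the camera
        # Pinhole projection of the Gaussian center to pixel coordinates.
        px = focal * x / z + size / 2
        py = focal * y / z + size / 2
        # Perspective-scaled isotropic footprint; full 3DGS derives an
        # anisotropic 2D covariance from the projected 3D covariance.
        sigma = focal * scales[i] / z
        g = np.exp(-((xs - px) ** 2 + (ys - py) ** 2) / (2 * sigma ** 2))
        alpha = np.clip(opacities[i] * g, 0.0, 0.999)
        img += (transmittance * alpha)[..., None] * colors[i]
        transmittance *= 1.0 - alpha
    return img

# Two toy splats: a red one centered in view, a blue one offset and farther away.
means = np.array([[0.0, 0.0, 5.0], [0.5, 0.0, 6.0]])
scales = np.array([0.2, 0.3])
colors = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
opacities = np.array([0.8, 0.8])
img = render_splats(means, scales, colors, opacities)
```

In the full method, the differentiable nature of this compositing is what lets the Gaussians' parameters be optimized directly from a set of 2D photos, which is the reconstruction side Apple leans on for Personas.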