Adobe's experimental AI tool can edit entire videos using one frame (www.theverge.com)

🤖 AI Summary
At Adobe Max, Adobe unveiled a set of experimental “sneaks” that use generative AI to radically simplify photo, video, and audio editing. The standout, Project Frame Forward, lets editors select or remove an object in a single frame and automatically propagate that selection and edit across an entire clip—no manual masking—while filling in context-aware background and reflections (example: a generated puddle that reacts to an existing cat).

Project Light Touch uses generative models to reshape and relight scenes: change light direction, diffusion, or color temperature, or add dynamic light sources that wrap around objects in real time. Project Clean Take applies voice-editing models to change prosody, emotion, or even replace words while preserving a speaker’s vocal identity, and includes automatic source separation to isolate and suppress background sounds.

Technically, these demos show advances in frame-to-frame propagation, semantic understanding for context-aware insertion, physics-informed rendering approximations for reflections and occlusion, and neural voice conversion with source separation. For creators and ML practitioners, this points to workflows that shift labor from manual masking and ADR to AI-driven, non-destructive edits—speeding iteration and enabling new creative effects—while raising implementation challenges around temporal consistency, realism, compute cost, and ethical use. The tools are experimental and not yet public, though past sneaks have graduated into Creative Cloud features, so similar capabilities may arrive in future products.
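To make the "propagate a single-frame selection across a clip" idea concrete, here is a minimal toy sketch in pure Python. It assumes a simple per-frame translation motion model for the tracked object; real systems (and presumably the Frame Forward demo) would use learned optical flow or video segmentation models instead, and all function names here are hypothetical illustrations, not Adobe's API.

```python
def shift_mask(mask, dx, dy):
    """Translate a binary mask (list of rows) by (dx, dy), zero-filling edges."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                out[ny][nx] = mask[y][x]
    return out

def propagate(mask0, motions):
    """Propagate an initial single-frame mask through per-frame motion estimates.

    mask0:   binary mask drawn on frame 0
    motions: list of (dx, dy) object-motion estimates, one per subsequent frame
    (a toy stand-in for optical flow; no temporal-consistency handling here)
    """
    masks = [mask0]
    for dx, dy in motions:
        masks.append(shift_mask(masks[-1], dx, dy))
    return masks

# A 3x3 frame with a vertical object in the middle column,
# drifting one pixel right, then one pixel down over two frames.
mask0 = [[0, 1, 0],
         [0, 1, 0],
         [0, 0, 0]]
masks = propagate(mask0, [(1, 0), (0, 1)])
```

The hard parts the demo glosses over—occlusion, deformation, and drift accumulating over hundreds of frames—are exactly the temporal-consistency challenges the summary mentions.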