The Future of AI Filmmaking Is a Parody of the Apocalypse, Made by a Guy Named Josh (www.wired.com)

🤖 AI Summary
Filmmaker Josh Wallace Kerrigan, the anonymous creator behind the Neural Viz/Monoverse universe, has quietly demonstrated a practical creative workflow for AI-first filmmaking. Using a chain of generative tools (Midjourney for concept art, FLUX Kontext for image refinement, ElevenLabs for voice timbre, Runway’s Act-One for facial motion capture, and newer video generators like Sora and Google’s Veo 2), he built a dense, coherent sci-fi satire of talking-head TV and genre pastiche.

Kerrigan writes traditional scripts and performs every role, then prompts and puppeteers models to generate characters, sets, and shots. He leans into the tools’ quirks (prompt failures, style drift, content safeguards) and even writes inconsistencies into the story (e.g., “morph inhibitors” to explain changing renders), while an ’80s/’90s grain aesthetic masks the remaining artifacts.

For the AI/ML community this is significant because it shifts the conversation from whether generative video is “good” to how creators can architect end-to-end pipelines that combine prompt engineering, model chaining, and real-time facial mocap to produce intentional, repeatable audiovisual narratives. Technically notable points: motion-capture-to-avatar mapping (Runway Act-One) increases performance control; model limitations become storytelling affordances; and rapid adoption of emerging tools amplifies both creative options and reproducibility challenges (consistency, safety filters, style drift). Neural Viz suggests a practical middle path where human authorship, iterative prompting, and model orchestration create a new form of auteurship rather than just novelty clips.
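The "model chaining" idea above can be sketched abstractly. The stages and names below are purely illustrative stand-ins for the tools mentioned (Midjourney, FLUX Kontext, ElevenLabs, Runway Act-One, Sora/Veo); none of these functions correspond to real APIs or to Kerrigan's actual setup.

```python
# Hypothetical sketch of a chained generative pipeline: each stage is a stub
# standing in for a real tool, and each consumes the artifacts of earlier
# stages. Real integrations would call external APIs instead.
from dataclasses import dataclass, field

@dataclass
class Shot:
    script: str                               # the traditionally written script
    artifacts: dict = field(default_factory=dict)

def concept_art(shot: Shot) -> Shot:
    # stand-in for a text-to-image step (e.g. Midjourney)
    shot.artifacts["concept"] = f"image<{shot.script}>"
    return shot

def refine_image(shot: Shot) -> Shot:
    # stand-in for an image-refinement step (e.g. FLUX Kontext)
    shot.artifacts["refined"] = f"refined<{shot.artifacts['concept']}>"
    return shot

def voice(shot: Shot) -> Shot:
    # stand-in for voice synthesis over the performed dialogue (e.g. ElevenLabs)
    shot.artifacts["voice"] = f"audio<{shot.script}>"
    return shot

def mocap_to_avatar(shot: Shot) -> Shot:
    # stand-in for facial mocap mapped onto a generated character
    # (e.g. Runway Act-One)
    shot.artifacts["performance"] = f"mocap<{shot.artifacts['refined']}>"
    return shot

def render_video(shot: Shot) -> Shot:
    # stand-in for a final video-generation step (e.g. Sora or Veo)
    shot.artifacts["video"] = (
        f"video<{shot.artifacts['performance']}+{shot.artifacts['voice']}>"
    )
    return shot

# Ordering matters: later stages read the artifacts earlier stages wrote.
PIPELINE = [concept_art, refine_image, voice, mocap_to_avatar, render_video]

def run(script: str) -> Shot:
    shot = Shot(script=script)
    for stage in PIPELINE:
        shot = stage(shot)
    return shot
```

The point of the sketch is the orchestration shape, not any individual model: the script stays the human-authored source of truth, and every generated artifact is keyed and inspectable, which is where the reproducibility challenges (style drift, safety filters) would surface in practice.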