🤖 AI Summary
Gilad Bracha argues that the next generation of multi-modal AIs will live inside programmable 2D/3D "spaces" — AR overlays, VR rooms, or rich documents — and that preserving human agency requires these substrates to be open, live, and deeply programmable. Rather than writing traditional code, people will "program" through natural interaction (language, gesture, music, drawing) with AIs that interpret intent, engage in dialog, and respond to immediate feedback. Bracha warns that proprietary control of models, runtimes, and hardware (e.g., smart glasses) risks locking users into manipulative, surveillance-prone environments, so substrates must support interoperability, offline-first work, privacy, persistence, and secure collaboration.
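A minimal sketch of what that interaction loop could look like, in TypeScript: the model call is stubbed, and `LiveSpace`, `Edit`, and `interpretIntent` are names invented here for illustration, not from Bracha's stack. The point is the shape of live programming: interpret an utterance into an edit, apply it to the running space immediately, and surface the result so the user can correct course.

```typescript
// Hypothetical sketch of "programming by natural interaction": interpret an
// utterance into an edit, apply it to a live space, show the result at once.
// LiveSpace, Edit, and interpretIntent are invented names; the model is stubbed.

type Edit = { target: string; property: string; value: unknown };

class LiveSpace {
  private objects = new Map<string, Record<string, unknown>>();

  // Applies the edit to the running space; no compile/deploy cycle.
  apply(edit: Edit): void {
    const obj = this.objects.get(edit.target) ?? {};
    obj[edit.property] = edit.value;
    this.objects.set(edit.target, obj);
  }

  describe(target: string): string {
    return JSON.stringify(this.objects.get(target) ?? {});
  }
}

// Stand-in for a multi-modal model mapping language/gesture/drawing to an edit.
function interpretIntent(utterance: string): Edit {
  return { target: "lamp", property: "color", value: "blue" }; // stubbed
}

const space = new LiveSpace();
space.apply(interpretIntent("make the lamp blue")); // takes effect immediately
console.log(space.describe("lamp"));                // => {"color":"blue"}
// The user reacts ("no, darker"), producing the next edit: a dialog loop.
```

The essential property is liveness: each interpreted edit takes effect in the running space, so feedback is immediate and the dialog can continue from the visible result.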
Technically, he promotes a live, self-modifying architecture grounded in object-capability security, distinct persistence models (persisting the program alone vs. program plus state, as in Smalltalk images), and transclusion/linking concepts extended into 3D (hyperlinks as portals). His Ampleforth/Newspeak stack — running on Web/Wasm and leveraging Croquet for real-time collaboration — is presented as an early prototype that surfaces practical tradeoffs (web accessibility vs. platform limits). For AI/ML practitioners this frames concrete engineering priorities: context-aware models that maintain spatial and temporal state, low-latency inference for interactive programming, capability-based permissioning, and design for local-first, collaborative, persistent worlds that avoid centralization and preserve user control.
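To make the object-capability idea concrete, here is a minimal TypeScript sketch; all names (`SpaceObject`, `ReadCap`, `WriteCap`, `attenuate`) are hypothetical, not from Ampleforth or Newspeak. In this model, permission is simply holding an unforgeable reference, and sharing a weaker capability attenuates access without any global ACL.

```typescript
// Hypothetical sketch of object-capability permissioning (names are invented,
// not from Newspeak/Ampleforth): holding a reference IS the permission; there
// is no ambient authority or global ACL to consult.

class SpaceObject {
  constructor(private contents: string) {}
  read(): string { return this.contents; }
  write(next: string): void { this.contents = next; }
}

// Capabilities are narrow interfaces over the object.
interface ReadCap { read(): string; }
interface WriteCap extends ReadCap { write(next: string): void; }

// Attenuation: derive a weaker capability before sharing it with a
// collaborator (or an AI agent embedded in the space).
function attenuate(cap: WriteCap): ReadCap {
  return { read: () => cap.read() }; // write is simply unreachable from here
}

const doc: WriteCap = new SpaceObject("hello, space");
const viewerCap: ReadCap = attenuate(doc);

console.log(viewerCap.read()); // allowed: the holder has a read capability
// viewerCap has no write method at all; granting, widening, or revoking
// access means handing out (or withholding) a different capability object.
```

In a Croquet-style replicated world, such capability objects would be a natural unit for deciding which participants, human or AI, may observe or mutate which parts of the shared state.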