🤖 AI Summary
Chrome ships with a hidden on-device LLM, Google's Gemini Nano, and you can enable it today by flipping a few developer switches. Open chrome://chrome-urls/ and enable internal debugging pages, then set chrome://flags/#prompt-api-for-gemini-nano-multimodal-input to Enabled, relaunch Chrome, and visit chrome://on-device-internals/ and click "Load Default" to pull the model down. (Advanced users can run await LanguageModel.availability() from the DevTools console to check whether the model is ready.) Once loaded, the model runs locally in a Chrome UI: text prompts work like a chatbot, and it also supports audio transcription and image analysis. You can verify it's truly local by turning off Wi-Fi and continuing to query it.
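To make the console step concrete, here is a minimal sketch of poking the model from DevTools. It assumes the experimental Prompt API surface described in the webmachinelearning/prompt-api explainer (a global LanguageModel with availability(), create(), prompt(), and destroy()); the API sits behind flags, so names and shapes may differ between Chrome versions.

```js
// Run from the DevTools console after enabling the flag and relaunching.
// Assumed experimental API surface; treat as a sketch, not a stable contract.

// Reports "unavailable", "downloadable", "downloading", or "available".
const status = await LanguageModel.availability();
console.log("Gemini Nano:", status);

if (status !== "unavailable") {
  // create() triggers the model download if it isn't on disk yet; per the
  // explainer, monitor() surfaces progress as a 0..1 loaded fraction.
  const session = await LanguageModel.create({
    monitor(m) {
      m.addEventListener("downloadprogress", (e) =>
        console.log(`download ${Math.round(e.loaded * 100)}%`),
      );
    },
  });

  // Plain text prompt, answered entirely on-device.
  console.log(await session.prompt("Explain an on-device LLM in one sentence."));

  session.destroy(); // release the session's resources when done
}
```

For the multimodal inputs the flag name advertises, the explainer documents prompts as message objects whose content is an array of typed parts (for example { type: "image", value: someImageBitmap }), with expected input types declared when the session is created; that shape is also experimental, so the same caveat applies.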
This matters because it demonstrates a practical shift toward lightweight, privacy-preserving LLMs embedded in desktop software. Gemini Nano runs entirely on your machine (a few gigabytes of model data) with no cloud calls, so it’s free to use, offline-capable, and limited only by your device’s CPU/GPU and the model’s capacity. For developers and learners it’s a low-friction way to demystify LLM behavior and experiment with multimodal capabilities; for the industry it signals a broader trend toward integrating efficient local models into platforms rather than relying solely on cloud-hosted services.