Apple's Foundation Models framework unlocks new intelligent app experiences (www.apple.com)

🤖 AI Summary
Apple today opened its Foundation Models framework to developers with iOS 26, iPadOS 26, and macOS 26, enabling apps to run Apple's on-device large language model (the core of Apple Intelligence) for free, offline, and with user data kept on the device. Early adopters, from fitness apps like SmartGym to journaling (Stoic), immersive education (CellWalk), and productivity tools (Stuff, VLLO), are already using the framework to generate personalized prompts, contextual explanations, workout summaries, video-editing suggestions, and dynamic natural-language task parsing without sending data to cloud services. Apple highlights real-world features such as conversational scientific explanations grounded through tool calling, structured workout routines, and natural-language task entry that show how on-device LLMs can power richer, privacy-preserving UX.

Technically, the Foundation Models framework integrates with Core ML, Vision, and the ImageCreator API; it supports tool calling and guided generation with Generable types for grounded, structured outputs, and it can draw on local user profiles and history to tailor responses. That lowers backend complexity and cost for developers, speeds iteration, and expands edge-AI use cases across education, health, productivity, and creative tools while keeping inference free and offline.

For the AI/ML community, this is a meaningful push toward mainstreaming capable, on-device generative models, shifting emphasis from cloud-only deployments to efficient, privacy-focused edge inference and raising new research and engineering priorities around model size, latency, grounding, and multimodal integration.
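To make the guided-generation and tool-calling pieces concrete, here is a minimal Swift sketch of the kind of on-device workout-summary feature described above. It assumes the Foundation Models API surface Apple has shown (LanguageModelSession, the @Generable and @Guide macros, the Tool protocol, SystemLanguageModel availability checks); the WorkoutSummary type, the fetchRecentWorkouts tool, and the prompt text are hypothetical, and exact signatures may differ across SDK versions.

```swift
import FoundationModels

// Structured output for guided generation: the model is constrained to
// produce a value of this shape instead of free-form text.
// (WorkoutSummary and its fields are illustrative, not an Apple type.)
@Generable
struct WorkoutSummary {
    @Guide(description: "A short, motivating title for the week")
    let title: String

    @Guide(description: "One-sentence recap of the training week")
    let recap: String

    @Guide(description: "Suggested focus for the next session")
    let nextFocus: String
}

// A tool the model can call to ground its answer in local app data.
// The tool name, argument shape, and returned text are placeholders.
struct WorkoutHistoryTool: Tool {
    let name = "fetchRecentWorkouts"
    let description = "Returns the user's most recent workouts from local storage."

    @Generable
    struct Arguments {
        @Guide(description: "How many recent workouts to fetch")
        let count: Int
    }

    func call(arguments: Arguments) async throws -> String {
        // A real app would query its own local database here.
        return "3x5 squats on Monday, 5 km run on Wednesday, rest on Thursday"
    }
}

func summarizeTrainingWeek() async throws {
    // Check that the on-device model is usable (e.g. Apple Intelligence
    // is enabled and the model assets are present).
    guard case .available = SystemLanguageModel.default.availability else {
        print("On-device model unavailable; fall back to a non-AI experience.")
        return
    }

    // All inference runs locally; prompts and tool results stay on device.
    let session = LanguageModelSession(
        tools: [WorkoutHistoryTool()],
        instructions: "You are a friendly fitness coach. Keep answers brief."
    )

    let response = try await session.respond(
        to: "Summarize my training this week and suggest what to focus on next.",
        generating: WorkoutSummary.self
    )

    let summary = response.content
    print(summary.title, summary.recap, summary.nextFocus)
}
```

The same pattern generalizes to the other examples in the summary: swap the Generable type for a task list or an editing suggestion, and swap the tool for whatever local data source the app wants the model grounded in.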