Microsoft's new AI agents won't just help us code, now they'll decide what to code (www.zdnet.com)

🤖 AI Summary
At Ignite 2025, Microsoft unveiled a stack that pushes autonomous agents beyond coding assistance toward decision-makers that can assemble, extend, and operate software. Key announcements: Agent 365 treats agents as first-class "users" with identities, permissions, lifecycle management, and auditability; Foundry supplies a unified catalog of Model Context Protocol (MCP) tools (1,400 systems at launch, e.g., SAP, Salesforce, HubSpot) and lets developers expose any API as an MCP server; and a trio of "IQ" services (Work IQ, Fabric IQ, and Foundry IQ) gives agents workflow awareness, semantic business-model context, and long-term knowledge and memory. Taken together, these building blocks let agents snap services together via MCP instead of relying on hand-coded integrations, and act on intent with state and context.

Technically this matters because MCP standardizes two-way LLM↔service communication, removing bespoke API glue, while Agent 365's governance model addresses the security, entitlement, and audit needs of goal-driven, stateful agents. The IQ layer supplies the shared context and memory agents need to make repeatable, context-aware decisions. Microsoft's stack is not fully self-writing software yet; progress will be incremental and fragile, and current agentic coding still needs heavy human supervision due to misunderstandings, hallucinations, and drift. Still, the announcements map a practical architecture for enterprise agents to become "mashup" builders, shifting how applications are assembled and governed while keeping human oversight central.
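For readers unfamiliar with MCP, "exposing any API as an MCP server" roughly means wrapping existing endpoints as tools an agent can discover and call over a standard protocol. Below is a minimal sketch using the open-source MCP Python SDK; the server name and the get_invoice tool are hypothetical stand-ins for illustration, not part of Microsoft's announcement or Foundry's catalog.

```python
# Minimal sketch: wrapping an existing internal API as an MCP server,
# so an MCP-capable agent can call it as a tool instead of needing a
# bespoke integration. Uses the open-source MCP Python SDK (pip install mcp).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("crm-invoices")  # server name advertised to connecting agents

@mcp.tool()
def get_invoice(invoice_id: str) -> dict:
    """Return invoice details for the given invoice ID."""
    # Hypothetical backend lookup; a real server would call the actual
    # line-of-business API (e.g., a CRM or ERP system) here.
    return {"invoice_id": invoice_id, "status": "paid", "amount_usd": 1250.0}

if __name__ == "__main__":
    # Serve the tool over stdio, the default MCP transport for local agents.
    mcp.run()
```

The point of the pattern is that the agent side needs no custom client code: any MCP-capable agent can list this server's tools and invoke them, which is what lets agents "snap together" services rather than depend on hand-written glue for each API.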