Microsoft's AI agents can decide what to code now - what that means (www.zdnet.com)

🤖 AI Summary
At Ignite 2025, Microsoft unveiled an architecture for truly agentic software: Agent 365 treats autonomous agents as managed “users” with identities, permissions, auditing, and lifecycle controls; Foundry provides a unified catalog of 1,400 MCP (Model Context Protocol) tools and an MCP extensibility layer so agents can snap into enterprise systems (SAP, Salesforce, HubSpot, etc.) without bespoke API wiring; and three “IQ” layers (Work IQ, Fabric IQ, and Foundry IQ) supply shared context, semantic business meaning, and long-term memory so agents can reason about intent, past actions, and organizational data.

Technically, this pairs two-way MCP communication (AIs can call services, and services can prompt AIs) with governance primitives and context models that let agents assemble, extend, and deploy solutions from existing building blocks rather than writing everything from scratch. The significance is practical and immediate: Microsoft is laying the missing infrastructure for agents to act as digital workers that build mashups of enterprise capabilities, potentially accelerating automation and app composition.

But the company and practitioners warn this won’t be turnkey: agentic coding is still error-prone (hallucinations, misinterpretation, flaky iterations) and will require substantial human oversight, robust governance, and security controls. The announcements map a realistic, incremental roadmap for agent-driven software in enterprises while highlighting crucial questions about reliability, auditability, and how much human supervision will still be required.
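To make the MCP pattern concrete, here is a minimal sketch using the open-source MCP Python SDK (`pip install mcp`): an agent-side client connects to an MCP server, discovers the tools it exposes, and invokes one with structured arguments. The server command (`sap-connector-mcp-server`) and tool name (`create_purchase_order`) are hypothetical placeholders for illustration, not actual Foundry or Agent 365 identifiers.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # An MCP server wraps an enterprise system behind a standard tool interface.
    # The command below is a made-up local server binary used for illustration.
    server = StdioServerParameters(command="sap-connector-mcp-server", args=[])

    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # The agent discovers what the system exposes instead of being
            # hand-wired to a bespoke API.
            tools = await session.list_tools()
            print("Available tools:", [t.name for t in tools.tools])

            # The agent calls a tool with structured arguments; results flow
            # back over the same protocol. Tool name and arguments are hypothetical.
            result = await session.call_tool(
                "create_purchase_order",
                arguments={"supplier": "ACME", "amount": 1200.0},
            )
            print(result.content)


if __name__ == "__main__":
    asyncio.run(main())
```

The same discovery-then-call loop is what lets a catalog of MCP servers stand in for bespoke per-system integrations: the agent only needs the protocol, not each vendor's API.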