🤖 AI Summary
IBM used its TechX25 stage to pivot from model-building to enterprise systems integration. It announced a commercial partnership with Anthropic to bring Claude-class models into IBM's Watsonx ecosystem, and it deprecated its earlier COBOL- and RPG-specific assistants in favor of a unified, AI-infused IDE called Project Bob, piloted by 6,000 IBM developers with an internally claimed productivity lift of ~45%. Project Bob will stitch together models from Anthropic, Mistral, Llama and IBM's Granite family to offer code modernization, document summarization, vector search, NL-to-SQL and prebuilt industry assistants, backing IBM's pitch that enterprise adoption needs usable platforms, not raw APIs.
On the hardware side, IBM is shipping its homegrown "Spyre" XPU as value-priced, bundled sidecars for System z mainframes (shipping Oct 28) and Power Systems (Dec 12). Spyre is sold only in eight-card bundles that can be ganged into a single virtual unit with up to 1 TB of shared memory, ~1.6 TB/s of memory bandwidth and an aggregate performance figure IBM cites as >2.4 petaops (precision and measurement mode unclear). The cards support INT4, INT8, FP8 and FP16, with claimed throughput scaling as precision drops; bundles include RHEL plus the RHEL.AI Inference Server and will gain OpenShift.AI and Watsonx.data in 1Q26. A notable technical differentiator is Spyre's live-migration circuitry, which lets inference workloads migrate alongside CPUs on Power and mainframe systems, functionality IBM says isn't possible with attached Nvidia or AMD GPUs. That positions IBM to sell a tightly integrated software-plus-hardware stack tailored to enterprise modernization.
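To put the bundle-level numbers in perspective, here is a back-of-envelope sketch dividing IBM's cited aggregates across the eight cards. The per-card figures are simple arithmetic under the assumption that the cards are symmetric; IBM has not published per-card specs in this announcement.

```python
# Per-card estimates for a Spyre eight-card bundle, derived from the
# aggregate figures IBM cites: 1 TB shared memory, ~1.6 TB/s memory
# bandwidth, >2.4 petaops. Assumes symmetric cards (an assumption, not
# a published spec).

CARDS_PER_BUNDLE = 8
BUNDLE_MEMORY_TB = 1.0        # shared across the virtual unit
BUNDLE_BANDWIDTH_TBPS = 1.6   # approximate aggregate
BUNDLE_PETAOPS = 2.4          # floor; precision/measurement mode unspecified

mem_per_card_gb = BUNDLE_MEMORY_TB * 1024 / CARDS_PER_BUNDLE
bw_per_card_gbps = BUNDLE_BANDWIDTH_TBPS * 1000 / CARDS_PER_BUNDLE
ops_per_card_teraops = BUNDLE_PETAOPS * 1000 / CARDS_PER_BUNDLE

print(f"memory/card:    {mem_per_card_gb:.0f} GB")       # 128 GB
print(f"bandwidth/card: {bw_per_card_gbps:.0f} GB/s")    # 200 GB/s
print(f"compute/card:   {ops_per_card_teraops:.0f} teraops")  # 300 teraops
```

Roughly 128 GB, 200 GB/s and 300 teraops per card, if the aggregates split evenly; the >2.4 petaops figure is also a lower bound at an unstated precision, so the per-card compute number is the least certain of the three.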