iPhone 17 Pro Doubles Qwen Image Generation On-Device (releases.drawthings.ai)

🤖 AI Summary
Apple’s iPhone 17 Pro brings a substantial on-device generative-AI boost: the new A19 Pro SoC adds GPU Neural Accelerators and other GPU improvements that deliver roughly 2× inference speed versus the prior generation, enabling large diffusion and image models that previously required servers to run locally. Draw Things benchmarks show FLUX.1 running at ~10s per step (a two-step run finishes in under 35s at 768×768) and 20B-class models like Qwen Image at ~13s per step (a two-step run in just over 45s at 768×768). At higher resolutions the phone still performs like a thin-and-light laptop: roughly 50s for the FLUX series and 65s for Qwen Image at 1024×1024. Technically, the win comes from improved compute for diffusion-based networks (which are compute-bound) and much better thermal headroom: unlike the iPhone 16 Pro, which begins throttling after about a minute at 70°F ambient, the 17 Pro sustains higher throughput for longer. For the AI/ML community this shifts important trade-offs: larger (>10B-parameter) generative models can now be tested, demoed, and deployed on-device with lower latency, improved privacy and offline capability, and reduced cloud cost. Developers should still account for power/thermal constraints and step counts, but the 17 Pro meaningfully expands what’s practical for mobile generative workflows.
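As a rough illustration of the step-count trade-off mentioned above, here is a minimal back-of-the-envelope sketch in Swift. It assumes total generation time is roughly per-step latency × step count plus a fixed overhead (model load, text encoding, VAE decode); the per-step numbers are taken from the summary, and the 5s overhead constant and the `StepBenchmark`/`estimatedRunTime` names are illustrative assumptions, not part of the Draw Things benchmarks.

```swift
import Foundation

// Illustrative per-step latencies at 768×768, as quoted in the summary above.
struct StepBenchmark {
    let model: String
    let secondsPerStep: Double
}

// Assumption: total time ≈ steps × per-step latency + fixed overhead
// (model load, text encoding, VAE decode). Overhead value is a guess.
func estimatedRunTime(_ bench: StepBenchmark, steps: Int, overheadSeconds: Double = 5) -> Double {
    return Double(steps) * bench.secondsPerStep + overheadSeconds
}

let benches = [
    StepBenchmark(model: "FLUX.1", secondsPerStep: 10),
    StepBenchmark(model: "Qwen Image", secondsPerStep: 13),
]

for bench in benches {
    for steps in [2, 4, 8] {
        let t = estimatedRunTime(bench, steps: steps)
        print("\(bench.model): \(steps) steps ≈ \(Int(t))s at 768×768")
    }
}
```

Under these assumptions, a 2-step FLUX.1 run lands around 25s and a 2-step Qwen Image run around 31s, in the same ballpark as the quoted sub-35s and ~45s totals; higher step counts scale roughly linearly until thermal throttling kicks in.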