🤖 AI Summary
Developer Steven Troughton-Smith used his new Quick Subtitles app (built on Apple’s speech frameworks) to benchmark on-device transcription across four Apple Silicon devices running OS 26. He transcribed a 92 MB, 50‑minute MP3 and averaged multiple runs with low variance: M4 Pro (MacBook Pro) 30.2 s, A19 Pro (iPhone 17 Pro) 35.1 s, A18 Pro (iPhone 16 Pro Max) 56.4 s, A18 (iPhone 16e) 56.7 s. The A19 Pro nearly matches the M4 Pro and transcribes about 60% faster than the A18 Pro, producing a full 50‑minute transcript in well under a minute.
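The "about 60% faster" and "far faster than real time" claims follow directly from the quoted timings. A quick sanity check (all numbers taken from the measurements above; device labels are just dictionary keys for illustration):

```python
# Derive real-time factor and relative speedup from the reported averages.
AUDIO_SECONDS = 50 * 60  # the 50-minute MP3 is 3000 seconds of audio

results = {  # device: average transcription time in seconds
    "M4 Pro": 30.2,
    "A19 Pro": 35.1,
    "A18 Pro": 56.4,
    "A18": 56.7,
}

for device, seconds in results.items():
    rtf = AUDIO_SECONDS / seconds             # multiples of real time
    vs_a18pro = results["A18 Pro"] / seconds  # speedup relative to the A18 Pro
    print(f"{device:8s} {seconds:5.1f} s  {rtf:5.1f}x real time  {vs_a18pro:.2f}x A18 Pro")
```

The A19 Pro's 35.1 s works out to roughly 85x real time, and 56.4 / 35.1 ≈ 1.61, i.e. about 60% faster than the A18 Pro, consistent with the summary.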
For the AI/ML community this is a strong real-world indicator that Apple's latest SoC and Neural Engine deliver very high on-device ASR throughput, far faster than real time, making offline, low-latency transcription and captioning practical on a phone. The results point to substantial model and runtime optimizations in Apple's speech stack, backed by hardware acceleration. That combination lowers reliance on cloud inference, improves privacy, reduces latency, and opens developer opportunities for local ML tasks (batch transcription, podcasts, automated captions) previously constrained to desktops or servers. The low variance across repeated runs also lends confidence that these gains will hold up in production use.