5 things you need to know about ChatGPT's big voice mode update (www.techradar.com)

🤖 AI Summary
OpenAI has rolled ChatGPT’s voice mode directly into the main chat window, retiring the old separate voice interface that reset conversations. You can now speak within an existing text thread and see live transcripts, prior messages, images, maps, and local weather appear inline, turning voice chats into a hybrid voice, text, and widget experience. The classic separate interface remains available via Settings for users who prefer it, and the update is rolling out on mobile and web (just update your app).

The update fixes major UX complaints about continuity and visibility, but there is an important technical caveat: voice sessions don’t use the newest GPT-5.1 text model. Paid users’ voice chats start on GPT-4o (OpenAI’s advanced voice model) and fall back to GPT-4o mini once their allotted GPT-4o minutes are used up; free users start on GPT-4o mini with limited daily use. As a result, answers and capabilities can differ between voice and text chats, with implications for fidelity, cost, and latency. For developers and power users, this signals a stronger push toward multimodal, conversational UIs while highlighting ongoing trade-offs between model parity, pricing tiers, and feature access.
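The tiering described above amounts to a simple fallback rule. Here is a minimal sketch of that rule in Python, assuming hypothetical fields for plan tier and remaining minutes; OpenAI has not published its actual selection logic or quota values, so every name below is illustrative rather than a real API.

```python
from dataclasses import dataclass

# Hypothetical session state: the field names and quota semantics are
# assumptions for illustration, not OpenAI's actual implementation.
@dataclass
class VoiceSession:
    plan: str                       # "free" or "paid"
    gpt4o_minutes_left: float       # paid users' remaining GPT-4o allotment
    daily_mini_minutes_left: float  # free users' remaining daily allowance

def pick_voice_model(session: VoiceSession) -> str:
    """Mirror the tiering the article describes: paid voice chats start
    on GPT-4o and drop to GPT-4o mini once the allotted minutes run out;
    free users get GPT-4o mini subject to a daily limit."""
    if session.plan == "paid":
        return "gpt-4o" if session.gpt4o_minutes_left > 0 else "gpt-4o-mini"
    if session.daily_mini_minutes_left > 0:
        return "gpt-4o-mini"
    # Assumed behavior once the free daily allowance is exhausted.
    raise RuntimeError("daily voice limit reached")

# Example: a paid user who has used up their GPT-4o minutes.
print(pick_voice_model(VoiceSession("paid", 0.0, 15.0)))  # -> gpt-4o-mini
```

The practical takeaway for anyone building on top of voice chats is the same one the article draws: the model answering you can change mid-tier, so don't assume voice responses match what the newest text model would say.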