Saved February 14, 2026
Do you care about this?
OpenAI has revamped ChatGPT's Voice mode, allowing users to interact directly within ongoing chats. Now, you can see live transcripts and visuals, like maps and photos, related to the conversation. Users can still switch back to the original interface if preferred.
If you do, here's more
OpenAI has updated ChatGPT's Voice mode so that it works directly inside your ongoing chats, with no need to switch to a separate interface. When you start a voice chat by tapping the "waveform" icon next to the text field, you'll see a live transcript of your conversation, and visual aids, such as maps or photos, can appear alongside the dialogue.
In a demonstration, ChatGPT displayed a transcript while providing information about popular bakeries, complete with maps and images of pastries. If users prefer the original interface, they can revert to it by toggling on Separate mode in the settings. This update reflects OpenAI's push towards a more integrated and visually informative experience.
The combination of voice responses and visuals aligns with ChatGPT's multimodal capabilities. Users can already prompt the model with voice, images, or videos, so adding visuals to voice interactions is a logical step. While Google's Gemini Live offers similar enhancements, OpenAI's approach focuses on informative, real-time responses rather than interactive overlays.