What you need to know
- OpenAI took to X to announce that it will delay the launch of the advanced Voice Mode, which was slated for release this spring.
- The company is testing the new voice model on a small group of people for feedback and learning before rolling it out to all Plus users.
- If all goes as anticipated, the rest of the Plus users will get access to the advanced voice model in the fall.
OpenAI announced in a post on X on Tuesday (June 25) that it is holding off on launching the advanced Voice Mode it demoed last month. The company had planned to roll the feature out widely as part of the software's Spring Update, but anyone hoping to experiment with it will now have to wait a little longer.
The new voice assistant is said to hold "human-like" conversations that users can interrupt, with lifelike touches such as laughter, sarcasm, and humor, a welcome change from the usual robotic responses we expect from an assistant. However, despite promising a quick rollout to subscribers, OpenAI has now announced a delay in releasing the upgraded "Voice Mode" (via Android Police).
We're sharing an update on the advanced Voice Mode we demoed during our Spring Update, which we remain very excited about: We had planned to start rolling this out in alpha to a small group of ChatGPT Plus users in late June, but need one more month to reach our bar to launch. … June 25, 2024
While it continues to test the new voice mode, the company says it will give a small group of ChatGPT Plus users access in late June, and some of them have already received an update on their phones. One such user took to X to share the invite she received, which stated that "advanced voice is in limited Alpha, it may make mistakes, and access (to it) is subject to change."
OpenAI stated that its systems will use the audio and video content users share via ChatGPT to better train the model. Users can opt out of sharing their data for training by turning off "Improve the model for everyone" in settings.
👀 pic.twitter.com/6RfCMkanN4 June 26, 2024
The user also posted a tweet featuring the new voice of "Sky," which appears to be different from the one we heard at the GPT-4o announcement. Initially, users on social media pointed out similarities between the assistant's voice and that of Scarlett Johansson, the award-winning actress. OpenAI denied that Sky's voice was based on Johansson and even published a blog post explaining how the voices were chosen. The actress countered with a statement revealing that OpenAI had approached her about officially voicing GPT-4o, an offer she declined, after which the company quickly removed the Sky voice option from its platform.
The new voice has a more "cheery tone," the user notes, as the model describes the weather in San Francisco. Other users in the same thread seemed unhappy with the new voice. One said, "It's horrifying; someone help it," while others simply found it cringeworthy. Listen in below.
GPT-4o talking about SF summer but in a very cheery tone pic.twitter.com/6taoPeVLdh June 26, 2024
OpenAI added in its tweet that, based on the feedback it received from this small group of testers, it plans to roll out the model to all Plus users in the fall.
"Exact timelines depend on meeting our high safety and reliability bar. We are also working on rolling out the new video and screen sharing capabilities we demoed separately, and will keep you posted on that timeline."