OpenAI’s ChatGPT now has eyes and ears

The popular chatbot’s creator, OpenAI, announced Monday that it is adding voice chat to ChatGPT.

The race to dominate the AI space has been fierce, and OpenAI appears to be setting the pace. Audio and multimodal features have quietly been in development across the industry: Meta launched AudioCraft for AI-generated music, Google Bard and Microsoft Bing offer multimodal chat features, Amazon previewed a revamped Alexa powered by its own large language model (LLM) last week, and Apple is testing AI-generated voice with Personal Voice.

How will it work? You tap a button and speak your question; ChatGPT converts the speech to text, feeds it to the large language model, gets an answer back, converts that answer to speech, and reads it aloud in what OpenAI calls a “human-like” voice. The experience should feel like talking to Alexa or Google Assistant, but OpenAI hopes the improved underlying technology will yield better answers. The company appears to be ahead of the pack in rebuilding virtual assistants on top of LLMs.
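To make that flow concrete, here is a minimal sketch of the speech-to-text-to-speech loop using OpenAI’s public Python SDK. This is an illustration of the pipeline described above, not OpenAI’s actual in-app implementation; the model names (whisper-1, gpt-4, tts-1), the voice, and the file paths are assumptions for the example.

```python
# Hypothetical sketch of the voice-chat loop: speech in -> LLM -> speech out.
# Assumes the `openai` Python SDK (v1.x) and an OPENAI_API_KEY in the environment;
# model names and file paths here are illustrative, not OpenAI's in-app setup.
from openai import OpenAI

client = OpenAI()

# 1. Convert the spoken question to text (speech recognition via Whisper).
with open("question.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# 2. Feed the transcribed question to the large language model.
chat = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": transcript.text}],
)
answer = chat.choices[0].message.content

# 3. Convert the model's text answer back to speech and save it for playback.
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input=answer,
)
speech.write_to_file("answer.mp3")
```

In the real app, the recording and playback happen on the device; the sketch just shows the three hand-offs the paragraph above describes.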

In a blog post, OpenAI suggested using this new feature to “request a bedtime story for your family or settle a dinner table debate.”

In OpenAI’s demo of the update, a user asks ChatGPT to tell a story about “the super-duper sunflower hedgehog named Larry.” In a human-sounding voice, the chatbot tells the story and answers follow-up questions like, “What was his house like?” and “Who is his best friend?”

Within two weeks, ChatGPT Plus and Enterprise subscribers will get the new app features. (Plus costs $20 a month, and Enterprise is only available to businesses.)

What are your thoughts on AI services now adding more human-like capabilities?