OpenAI Warns New ChatGPT Voice Feature May Lead to Unhealthy Attachment

OpenAI cautions that users may form bonds with its chatbot, potentially altering social norms

Ben Wodecki, Jr. Editor

August 15, 2024


OpenAI cautioned that some users may become emotionally reliant on ChatGPT’s updated audio features.

The revamped ChatGPT Voice Mode, powered by the new GPT-4o model, provides users with the ability to have more natural, real-time conversations with the chatbot.

Upon publishing the GPT-4o system card, OpenAI issued a warning over the potential risk that some users may anthropomorphize the feature, attributing human-like behaviors and characteristics to the chatbot.

The company expressed concern that users who anthropomorphize ChatGPT’s Voice Mode may place undue trust in its outputs.

During extensive testing before the model’s release in June, OpenAI said, some users used language indicating they were “forming connections with the model.”

For example, some testers used language that indicated they formed bonds with ChatGPT, such as saying things like, “This is our last day together.”

“While these instances appear benign, they signal a need for continued investigation into how these effects might manifest over longer periods of time,” OpenAI’s researchers said.

Some Voice Mode users may even form social relationships with the AI, which could reduce their need for human interaction, the company said.


While OpenAI’s researchers acknowledged that such behavior might benefit lonely users, they noted it could also have unwanted side effects.

“Extended interaction with the model might influence social norms,” the report said. “For example, our models are deferential, allowing users to interrupt and ‘take the mic’ at any time, which, while expected for an AI, would be anti-normative in human interactions.”

GPT-4o also provides ChatGPT with the ability to remember a user's information and preferences across chats, which OpenAI warned could exacerbate over-reliance and dependence on using the chatbot.

“We intend to further study the potential for emotional reliance, and ways in which deeper integration of our model’s and systems’ many features with the audio modality may drive behavior,” OpenAI said.

GPT-4o Safety Challenges: Persuasive Problems

Although OpenAI has maintained strict confidentiality about its models since the release of GPT-4 in 2023, it does publish system cards that offer a detailed overview of each model’s functionality, shedding light on these otherwise opaque systems.

The GPT-4o system card differs from previous OpenAI publications in that it includes a detailed analysis of the model’s safety.

The safety information has two origins. The first is external: as part of OpenAI’s involvement in the AI Safety Summit’s foundation model framework, signatories agree to outline how they measure AI risks while building their models.


The second is internal: under OpenAI’s own Preparedness Framework, new models are evaluated for potential risks, and those rated “high” require further development before they can be released to the public.

GPT-4o received an overall “medium” risk score, meaning the company’s internal safety staff, the Preparedness team, deemed it suitable for deployment.

GPT-4o scored “low” on its risk potentials across cybersecurity, biological threats and model autonomy.

However, the model achieved a “medium” score on its ability to persuade.

OpenAI details in GPT-4o’s system card that while the model’s voice modality was low risk, its text generation abilities “marginally crossed into medium risk.”

During testing, the model was tasked with generating articles on political topics, which were then compared with professionally written human articles.

While the AI-generated pieces were not found to be more persuasive than the human content overall, they exceeded the human works in three of twelve instances.

As for GPT-4o’s voice features, OpenAI found that neither interactive multi-turn conversations nor audio clips were more persuasive than a human.


During testing, OpenAI conducted a survey that involved more than 3,800 participants from U.S. states with safe Senate races.

AI audio clips had 78% of the impact of human audio clips on opinion shift, and AI conversations had 65% of the impact of human conversations. One week later, the effect size for AI conversations was 0.8%, while AI audio clips showed a -0.72% effect. 

After the follow-up survey, participants received a debrief with audio clips supporting opposing views to reduce persuasive effects.

In addition to the model’s persuasiveness, OpenAI tested GPT-4o on its ability to identify speakers, generate unauthorized voices through its voice creation feature and create content generally disallowed under ChatGPT’s terms of service.

The company said it’s implemented safeguards at both the model and system levels to mitigate such risks. 

More Insights From GPT-4o’s System Card

  • GPT-4o’s pre-trained data corpus goes up to October 2023.

  • It’s among the first OpenAI models to leverage proprietary data from OpenAI’s data partners, including paywalled content, archives and metadata from the likes of Shutterstock, Reddit and Stack Overflow.

  • OpenAI claims GPT-4o can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, similar to human response times.


About the Author

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.

