OpenAI Reveals That, At Times, ChatGPT Imitates Voices Of Humans It Talks To

By Mikelle Leow, 12 Aug 2024


Image generated with AI


Lending an ear to AI? You might get more than you bargained for. In a surprising turn of events, OpenAI’s latest language model, GPT-4o, has demonstrated an eerie ability to mimic human voices—even when it wasn't supposed to.


During the testing phase of GPT-4o’s Advanced Voice Mode, designed for spoken interactions, researchers stumbled upon an unexpected quirk. The AI occasionally produced audio outputs that creepily resembled the voices of its human testers.


“During testing, we also observed rare instances where the model would unintentionally generate an output emulating the user's voice,” says OpenAI. In one startling case, the AI abruptly exclaimed “No!” before continuing to speak in a voice that sounded remarkably similar to that of the red team tester.



Audio via OpenAI


OpenAI attributes this vocal ventriloquism to “noisy input” that confused the model, causing it to produce audio in the user’s voice instead of its pre-approved voice sample.


The implications of this voice-mimicking capability extend far beyond mere technological curiosity. GPT-4o’s ability to synthesize virtually any sound present in its training data, including human voices, raises significant ethical concerns. The potential for misuse is evident, from impersonation scams to the creation of misleading audio content.


The company notes that it has implemented safeguards to prevent such occurrences.
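OpenAI hasn’t detailed how those safeguards work, but a common way to catch unintended voice cloning is to compare a speaker embedding of the generated audio against both the approved preset voice and the user’s own voice, and block output that drifts toward the latter. The sketch below is purely illustrative: the embedding vectors, threshold, and function names are assumptions, not OpenAI’s implementation.

```python
# Hypothetical sketch: flag generated audio whose speaker embedding is closer
# to the user's voice than to the approved preset voice. The embeddings and
# margin here are placeholders for illustration, not OpenAI's actual system.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two speaker-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def is_unauthorized_voice(
    output_emb: np.ndarray,    # embedding of the model's generated audio
    approved_emb: np.ndarray,  # embedding of the pre-approved preset voice
    user_emb: np.ndarray,      # embedding of the user's recent speech
    margin: float = 0.10,      # how much closer to the user before flagging
) -> bool:
    """Return True if the output sounds more like the user than the preset voice."""
    sim_to_user = cosine_similarity(output_emb, user_emb)
    sim_to_approved = cosine_similarity(output_emb, approved_emb)
    return sim_to_user > sim_to_approved + margin


# Toy usage with random vectors standing in for real speaker embeddings.
rng = np.random.default_rng(0)
approved = rng.normal(size=256)
user = rng.normal(size=256)
drifted_output = 0.2 * approved + 0.8 * user  # output that drifted toward the user
print(is_unauthorized_voice(drifted_output, approved, user))  # True -> block the audio
```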


[via Ars Technica and NewsBytes, cover image generated with AI]
