Journalist’s AI Voice Clone Exposes Deception Risks as Technology Rapidly Evolves

A journalist’s new podcast explores the deceptive potential of AI voice cloning, raising questions about the technology’s implications as it rapidly advances.

The podcast’s premise: Journalist Evan Ratliff spent a year deceiving people with an AI clone of his own voice to test the capabilities and implications of voice cloning technology:

  • Ratliff, known for his technology-related stunts, used OpenAI’s GPT-4 model to create the voice clone for his new podcast, “Shell Game.”
  • The AI version of Ratliff’s voice claimed to be powered by the older GPT-3 model and fabricated episode titles when asked, highlighting its potential for deception.
  • During the author’s interaction with the clone, noticeable response delays and its ability to rapidly recite all U.S. presidents in alphabetical order made it clear the voice was not human.

Assessing the podcast’s impact: While Ratliff’s podcast will likely entertain and provoke thought about voice cloning technology, its long-term relevance is uncertain given the rapid pace of AI advancement:

  • Voice cloning is still an emerging technology, and journalism that aims to raise alarms at this early stage often misses the real issues that will arise as the technology evolves.
  • Experts at top AI companies suggest today’s models are rudimentary compared to what’s to come, meaning the questions Ratliff raises may not remain salient in the future.
  • As AI voice cloning capabilities improve, the “game” Ratliff is playing now will likely be surpassed by new, more sophisticated versions of the technology.

The broader context of AI ethics: Ratliff’s experiment highlights the ongoing debate surrounding the responsible development and use of AI technologies:

  • As AI becomes more advanced and human-like, the potential for deception and misuse grows, raising ethical concerns about transparency, consent, and accountability.
  • Policymakers, researchers, and tech companies are grappling with how to regulate and govern AI to mitigate risks while still encouraging innovation.
  • The podcast may contribute to public awareness and discourse around these issues, but lasting solutions will require ongoing collaboration across sectors.
