AI Cloned a Senator's Voice to Show a Deepfake Election
Updated at: 3/3/2025
Edited and Reviewed by Hey It's AI editors
AI deepfakes just fooled politicians! What happens when fake voices spread election lies? Do we build safeguards or wait for chaos?
Imagine picking up your phone and hearing your favorite politician urging you to vote. Sounds normal, right? But what if I told you it wasn't actually them? What if that convincing voice was nothing more than a few lines of code and some clever AI trickery? Welcome to the future, where deepfake voices aren't just sci-fi—they're potential election disruptors.
Jacqui Lambie vs. AI
Tasmanian Senator Jacqui Lambie recently got a taste of what AI voice deepfakes can do. ABC News Verify used inexpensive AI tools to clone her voice and create a robocall. The result? A voice message so eerily accurate that even some of her supporters were fooled. That's right—people who have heard her speak dozens of times couldn't tell the difference. Let that sink in.
The Moment of Truth
Picture this. Lambie sits in front of a computer, listening as 'her' voice plays from the speakers: ‘G’day Tasmanians, it’s Senator Jacqui Lambie here...’ The words roll out smoothly, the accent is spot-on, even the tone sounds unmistakably like her. Spooky? Absolutely.
Now, she knew this was a test. But what if she wasn’t in on it? What if someone used this tech to make fake political promises or announce bogus policies? Yikes.
AI Voice Cloning: Easy, Cheap, and Scarily Good
You might think this level of deepfake trickery requires some high-tech lab and a team of AI wizards. Nope. All it took was readily available, inexpensive AI tools—stuff you or I could play around with on a laptop. That’s right, the ability to clone a politician’s voice isn’t locked away in a government lab; it's out in the wild for anyone to use.
How Does It Work?
At a basic level, AI voice cloning typically involves:
- Gathering voice samples (a few minutes of clear audio is usually enough)
- Feeding them into machine learning models designed to mimic speech patterns
- Tweaking the output to improve accuracy and match the speaker’s cadence
Voila! You've got yourself a convincing AI-generated voice ready to make fake political statements... or, you know, prank your friends.
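To make those steps concrete, here is a minimal sketch of what zero-shot voice cloning looks like with an open-source text-to-speech library such as Coqui TTS. The model name, file paths, and the sample text are illustrative assumptions on my part; this is not the specific toolchain ABC News Verify used.

```python
# Minimal voice-cloning sketch using the open-source Coqui TTS library
# (pip install TTS). Model name and file paths are illustrative assumptions,
# not the exact tools used in the ABC News Verify experiment.
from TTS.api import TTS

# 1. Load a multilingual model that supports zero-shot voice cloning.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# 2. Point it at a short, clean recording of the target speaker.
#    A few minutes of clear audio is usually plenty.
reference_audio = "speaker_sample.wav"  # hypothetical local file

# 3. Generate speech in the cloned voice and write it to disk.
tts.tts_to_file(
    text="G'day, it's your senator here with an important message...",
    speaker_wav=reference_audio,
    language="en",
    file_path="cloned_message.wav",
)
```

That's the whole pipeline: a pre-trained model, a short reference clip, and a single function call. No lab, no AI wizards.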
Should We Be Worried?
Short answer: YES.
Long answer: We're entering a world where hearing something no longer guarantees it's real. If deepfake voices can fool people, imagine the chaos they could unleash in an election. Misinformation spreads like wildfire online, and deepfake robocalls could make it even worse. Imagine waking up to an AI-generated call telling you your polling station has changed, or that a candidate has dropped out. Dangerous stuff.
What Can We Do?
We can't stop technological progress, but we can get smarter about spotting fakes. Here are some ways to stay ahead:
- Be skeptical of unexpected robocalls, even if they sound legit
- Verify political messages through official sources
- Encourage governments to implement AI detection measures
If you thought fake news was bad, deepfake voices are about to take things to a whole new level.
Final Thoughts
AI is incredible, but like any powerful tool, it can be used for good or evil. What happened with Jacqui Lambie was just a controlled experiment, but it exposed a glaring issue—our ears can no longer be trusted. As AI enthusiasts and developers, should we be building better detection systems? Or do we just sit back and wait for the first deepfake election scandal? Let’s talk.