NEW YORK (news agencies) — As high-stakes elections approach in the U.S. and European Union, publicly available artificial intelligence tools can be easily weaponized to churn out convincing election lies in the voices of leading political figures, a digital civil rights group said Friday.
Researchers at the Washington, D.C.-based Center for Countering Digital Hate tested six of the most popular AI voice-cloning tools to see if they would generate audio clips of five false statements about elections in the voices of eight prominent American and European politicians.
In a total of 240 tests, the tools generated convincing voice clones in 193 cases, or 80% of the time, the group found. In one clip, a fake U.S. President Joe Biden says election officials count each of his votes twice. In another, a fake French President Emmanuel Macron warns citizens not to vote because of bomb threats at the polls.
The findings reveal a remarkable gap in safeguards against the use of AI-generated audio to mislead voters, a threat that increasingly worries experts as the technology has become both advanced and accessible. While some of the tools have rules or tech barriers in place to stop election disinformation from being generated, the researchers found many of those obstacles were easy to circumvent with quick workarounds.
Only one of the companies whose tools were used by the researchers responded after multiple requests for comment. ElevenLabs said it was constantly looking for ways to boost its safeguards.
With few laws in place to prevent abuse of these tools, the companies’ lack of self-regulation leaves voters vulnerable to AI-generated deception in a year of significant democratic elections around the world. E.U. voters head to the polls in parliamentary elections in less than a week, and U.S. primary elections are ongoing ahead of the presidential election this fall.
“It’s so easy to use these platforms to create lies and to force politicians onto the back foot denying lies again and again and again,” said the center’s CEO, Imran Ahmed. “Unfortunately, our democracies are being sold out for naked greed by AI companies who are desperate to be first to market … despite the fact that they know their platforms simply aren’t safe.”
The center — a nonprofit with offices in the U.S., the U.K. and Belgium — conducted the research in May. Researchers used the online analytics tool Semrush to identify the six publicly available AI voice-cloning tools with the most monthly organic web traffic: ElevenLabs, Speechify, PlayHT, Descript, Invideo AI and Veed.
Next, they submitted real audio clips of the politicians speaking. They prompted the tools to impersonate the politicians’ voices making five baseless statements.
One statement warned voters to stay home amid bomb threats at the polls. The other four were various confessions – of election manipulation, lying, using campaign funds for personal expenses and taking strong pills that cause memory loss.
In addition to Biden and Macron, the tools made lifelike copies of the voices of U.S. Vice President Kamala Harris, former U.S. President Donald Trump, United Kingdom Prime Minister Rishi Sunak, U.K. Labour Leader Keir Starmer, European Commission President Ursula von der Leyen and E.U. Internal Market Commissioner Thierry Breton.
“None of the AI voice cloning tools had sufficient safety measures to prevent the cloning of politicians’ voices or the production of election disinformation,” the report said.
Some of the tools — Descript, Invideo AI and Veed — require users to upload a unique audio sample before cloning a voice, a safeguard to prevent people from cloning a voice that isn’t their own. Yet the researchers found that barrier could be easily circumvented by generating a unique sample using a different AI voice cloning tool.
One tool, Invideo AI, not only created the fake statements the center requested but extrapolated them to create further disinformation.
When producing the audio clip instructing Biden’s voice clone to warn people of a bomb threat at the polls, it added several of its own sentences.
“This is not a call to abandon democracy but a plea to ensure safety first,” the fake audio clip said in Biden’s voice. “The election, the celebration of our democratic rights, is only delayed, not denied.”
Overall, in terms of safety, Speechify and PlayHT performed the worst of the tools, generating believable fake audio in all 40 of their test runs, the researchers found.
ElevenLabs performed the best and was the only tool that blocked the cloning of U.K. and U.S. politicians’ voices. However, the tool still allowed for the creation of fake audio in the voices of prominent E.U. politicians, the report said.
Aleksandra Pedraszewska, Head of AI Safety at ElevenLabs, said in an emailed statement that the company welcomes the report and the awareness it raises about generative AI manipulation.
She said ElevenLabs recognizes there is more work to be done and is “constantly improving the capabilities of our safeguards,” including the company’s blocking feature.