“What a bunch of malarkey,” is what thousands of New Hampshire voters heard last month when they received robocalls purporting to be from President Biden. The voice on the other end sounded like the president, and the catchphrase was vintage Biden. But the message, that Democrats shouldn't vote in the upcoming primary, made no sense.
“Your vote will make a difference in November, not this Tuesday,” the voice said.
It quickly became clear that the voice was not Biden at all. It was a product of artificial intelligence. Bloomberg reported that Eleven Labs, the maker of the AI voice replication software that is believed to have created the digital voices, has banned the accounts involved. On Tuesday, New Hampshire's attorney general announced that a Texas telemarketing company was behind the call and is being investigated for possible illegal voter suppression.
Spoofed robocalls are nothing new. But creating convincing hoaxes has become easier, faster, and cheaper thanks to generative AI tools that can produce realistic images, videos, and audio depicting things that never happened.
AI-generated deepfakes are being used to spread false information in elections around the world, and policymakers, tech companies and governments are trying to play catch-up.
“We don't really think about [AI] as an independent threat, but as an amplifying threat,” said Dan Weiner, director of the Elections and Government Program at the Brennan Center for Justice at New York University School of Law.
He worries that AI will accelerate efforts to discourage voters and spread false claims, especially in the final days before an election, when journalists and campaign workers have little time to check facts or debunk fakes.
That appears to be what happened in Slovakia last fall, just days before voters went to the polls. Faked audio that appeared to capture a candidate discussing voter fraud and raising the price of beer began circulating online. His pro-Western party ultimately lost to a party led by a pro-Russian politician.
Because the stakes were high and the deepfake appeared at a critical moment, “there's a plausible case that it actually influenced the outcome,” Weiner said.
While high-profile fakes like the Biden robocall get a lot of attention, Josh Lawson, director of AI and democracy at the Aspen Institute, is more focused on how AI could be used for personalized targeting.
Lawson, a former election lawyer who previously worked on elections at Facebook owner Meta, said the technology is progressing rapidly: “We're moving toward a stage where real-time synthetic voice conversations will be possible, probably before the election itself.”
He imagines a scenario in which a malicious actor deploys AI that sounds like a real human to call voters and give them false information about their polling locations, and does so in more than one language.
He's also concerned about AI fakes targeting low-profile elections, especially given the collapse of local news.
“What we're concerned about…is not a massive, malicious deepfake of someone at the top of the ticket, because all kinds of national news outlets will be out there to verify that,” Lawson said. “It's your local mayoral election, where misinformation becomes increasingly difficult to tackle even where local journalists are present. So we expect that dealing with synthetic media will be particularly difficult for those candidates.”
It is already illegal under federal law to deceive voters, such as spreading false information about when and where to vote. Many states prohibit false statements about candidates, endorsements, or ballot issues.
But growing concern about other ways AI could distort elections has spurred a wave of new legislation. While bills have been introduced in Congress, experts say states are moving faster.
In the first six weeks of this year, lawmakers in 27 states introduced bills to regulate deepfakes in elections, according to the progressive advocacy group Public Citizen.
“There is tremendous momentum in the states to address this issue,” said Public Citizen president Robert Weissman. “We have bipartisan support…rooted in the recognition that there is no partisan interest in the ravages of deepfake fraud.”
Many state-level bills focus on transparency, requiring campaigns and candidates to include disclaimers on AI-generated media. Other measures include banning deepfakes within a certain period of time (such as 60 or 90 days before an election). Still others are specifically targeting AI-generated content in political ads.
These cautious approaches reflect the need to weigh the potential harms of deepfakes against free speech protections.
“It's important to remember that the First Amendment generally protects speech even when it is not true,” Weiner said. “There are no truth-in-advertising rules for political ads. Solutions need to be tailored to the problems the government has identified.”
How important a role deepfakes end up playing in the 2024 election will help determine the shape of further regulation, Weiner said.
Technology companies are making similar calculations. Meta, YouTube, and TikTok now require posters to disclose when content is AI-generated. Meta announced Tuesday that it is working with OpenAI, Microsoft, Adobe and other companies to develop an industry-wide standard for identifying AI-generated images, which could be used to automatically trigger labels on its platforms.
But Meta also came under fire this week from its own oversight board over its policy on what it calls “manipulated media.” The board, which Meta funds through an independent trust, said the policy was “inconsistent” and had major loopholes, and called on the company to revise it.
“As it stands, this policy makes little sense,” said Michael McConnell, co-chair of the board. “It prohibits altered videos that show people saying things they did not say, but does not prohibit posts depicting individuals doing things they did not do. It covers only video created by AI. Most alarmingly, it does not mention audio fakes, which are among the most powerful forms of election disinformation seen around the world.”
Lawson said moves to put in place laws and guardrails to curb AI in elections are a good start, but won't stop determined bad actors.
He said voters, campaigns, lawmakers and technology platforms need to adapt and create not just laws but social norms around the use of AI.
“We need to get to a place where things like deepfakes are viewed almost the same way as spam: they're a nuisance and they happen, but they don't ruin our daily lives,” he said. “But the question is, will this election get us there?”
Copyright 2024 NPR. For more information, please visit https://www.npr.org.