The European Union on Tuesday called on Facebook, TikTok and other big tech companies to use clear labels to crack down on deepfakes and other AI-generated content ahead of polls across Europe in June.
The recommendation is part of a raft of guidelines published by the European Commission under its landmark content law to help digital giants counter the risk of elections being undermined by disinformation.
The EU executive branch has launched a series of measures to crack down on big tech companies, particularly regarding content moderation.
Its biggest tool is the Digital Services Act (DSA), under which the EU has designated 22 digital platforms as “very large”, including Instagram, Snapchat, YouTube and X.
The excitement around artificial intelligence that has continued since the introduction of OpenAI's ChatGPT in late 2022 has coincided with growing EU concerns about the harms of the technology.
Brussels is particularly concerned about the influence of Russian “manipulation” and “disinformation” on the June 6-9 elections in the 27 EU member states.
In its new guidelines, the commission says the largest platforms must “assess and mitigate specific risks linked to AI”, including by clearly labeling AI-generated content such as deepfakes.
To reduce risk, the commission recommends that major platforms promote official information about the election and “reduce the monetization and dissemination of content that threatens the integrity of the electoral process.”
Thierry Breton, the EU's top tech enforcer, said: “With today's guidelines, we are making full use of all the tools offered by the DSA to ensure platforms comply with their obligations and are not misused to manipulate our elections, while safeguarding freedom of expression.”
The guidelines are not legally binding, but platforms that do not follow them must explain what other “equally effective” measures they are taking to limit the risks.
The EU could ask for more information and, if regulators conclude a platform is not in full compliance, open investigations that could lead to hefty fines.
The commission also said that under new guidelines, political ads “should be clearly labeled as such” before stricter laws on the issue come into force in 2025.
It also calls on platforms to put in place mechanisms to “mitigate the impact of incidents that could have a material impact on election results and turnout.”
The EU said it would conduct “stress tests” with the relevant platforms in late April.
X has already been under investigation for content moderation since December.
And on March 14, the commission pressed Facebook, Instagram, TikTok and four other platforms to provide more information about how they are countering the risks AI poses to elections.
In the past few weeks, several companies, including Meta, have outlined their plans.
TikTok announced further steps on Tuesday, including push notifications starting in April directing users to “trustworthy and authoritative” information about the June elections.
TikTok has around 142 million monthly active users in the EU and is increasingly used as a source of political information among young people.