Meta announced that it is working on ways to better detect and identify AI-generated images across Facebook, Instagram, and Threads in the lead-up to the 2024 election. Nick Clegg, Meta's president of global affairs, said the technology is currently in development and will notify users when an image in their feed was generated using AI.
Currently, Meta adds watermarks and metadata to images generated using its own Meta AI software. Now the company is looking to extend that capability to images produced by other companies' tools, including those from Adobe (ADBE), Google (GOOG, GOOGL), Midjourney, Microsoft (MSFT), OpenAI, and Shutterstock (SSTK).
Clegg said Meta is collaborating with the Partnership on AI, a nonprofit group of academics, civil society experts, and media organizations committed to ensuring that AI has "positive outcomes for people and society." The company is working to develop common standards that can be used to identify AI-generated images across the web.
Experts say AI image generation could trigger a tsunami of disinformation in the lead-up to elections, and some of what's coming is already playing out in real time. Ahead of former President Trump's arrest in New York in 2023, AI-generated images appearing to show him running from police began circulating on the web.
And in 2023, a supposed AI-generated image of an explosion outside the Pentagon spread across the internet, causing a brief dip in the stock market before officials confirmed that nothing out of the ordinary had occurred.
Meta's move will help identify AI-generated images, but it can't detect AI-generated video or audio. To that end, Clegg said the company is adding a feature that users can use to label AI-generated video and audio they share across Meta's platforms. Users who don't comply could face penalties, he explained.
"While companies are starting to include signals in their image generators, they haven't started including them in AI tools that generate audio and video at the same scale, so we can't yet detect those signals and label this content from other companies," he said.
AI-generated audio and video are already being used to spread disinformation. In 2022, a deepfake video appeared on the web that seemed to show Ukrainian President Volodymyr Zelenskyy instructing his military to lay down their weapons. And in January, a deepfake audio recording imitating President Biden urged New Hampshire voters not to participate in the state's presidential primary.
Clegg also pointed out that while Meta will be able to identify AI-generated images, there are still ways those markers can be manipulated or stripped out. To address that, the company's Fundamental AI Research (FAIR) team is working on ways to embed watermarks into images during generation so that they can't be removed, he said.
AI-generated content is still a new concept for many people on the web. Manipulating images in Photoshop isn't new, but the ability to instantly generate large numbers of images and flood social media with them is.
The explicit, AI-generated images of Taylor Swift that circulated across sites including X demonstrate how easily such content can spread instantly across vast swaths of the internet. Meta's solution is just one of the many tools that will be needed to combat a new generation of misinformation.
On Monday, Meta's oversight board criticized the company for the way it currently handles manipulated video content. The board said it is concerned about the manipulated media policy in its current form, which it finds "incoherent, lacking in persuasive justification and inappropriately focused on how content has been created, rather than on which specific harms it aims to prevent (for example, to electoral processes)."
The board suggested that Meta should begin labeling manipulated content, such as videos altered by artificial intelligence (AI) or other means, when that content has the potential to cause harm.
The board's remarks came in response to a video of President Biden that was altered without the use of AI and was allowed to remain up because it did not violate the company's manipulated media policy. The group argued that Meta needs to add context to videos that have been manipulated by means other than AI.
Daniel Howley is the tech editor at Yahoo Finance. He's been covering the tech industry since 2011. You can follow him on Twitter @DanielHowley.