- Major technology companies have announced an agreement to reduce the risk that artificial intelligence will disrupt the 2024 US election process.
- Meta and Microsoft have already implemented measures such as detecting and labeling AI-generated content.
Technology industry leaders including OpenAI, Microsoft, TikTok, X, Meta, Amazon, and Google have signed a new agreement, the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections,” aimed at minimizing the risk of artificial intelligence being used to interfere with the 2024 U.S. election.
The companies participating in the accord have made the following commitments:
- Developing and implementing technology, including open-source tools where appropriate, to mitigate the risks posed by deceptive AI election content.
- Assessing their models for the risks they may pose with respect to deceptive AI election content.
- Detecting the distribution of such content on their platforms.
- Appropriately addressing deceptive content detected on their platforms.
- Fostering cross-industry resilience to deceptive AI election content.
- Providing transparency to the public about how the companies address the issue.
- Engaging with a diverse set of global civil society organizations and academics.
- Supporting efforts to foster public awareness, media literacy, and societal resilience.
Under the agreement, the companies are expected to establish controls over AI-generated audio, video, and images that could mislead voters, election officials, and candidates. This includes efforts to detect and label content that has been generated or modified by AI. The accord does not, however, prohibit the use or distribution of such content outright.
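As an illustration of what “detecting and labeling” can mean in practice, the sketch below checks an image file for signs of an embedded provenance label such as a C2PA Content Credentials manifest. This is a minimal, assumption-laden example rather than any signatory's actual pipeline: the marker strings and the Pillow-based fallback scan are illustrative, and a production system would parse and cryptographically verify the manifest (or check an invisible watermark) instead.

```python
# Illustrative sketch only: look for hints of a C2PA-style
# "Content Credentials" provenance label in an image file.
# The marker strings below are assumptions for illustration;
# a real detector would parse and cryptographically verify
# the embedded manifest rather than scan for substrings.
from PIL import Image

ASSUMED_MARKERS = (b"c2pa", b"jumb")  # JUMBF boxes commonly carry C2PA manifests

def has_provenance_hint(path: str) -> bool:
    """Return True if the file shows any sign of an embedded
    provenance label (e.g., a C2PA manifest marker)."""
    # First, check the format-level metadata that Pillow surfaces.
    with Image.open(path) as img:
        if any("c2pa" in str(key).lower() for key in img.info):
            return True
    # Fall back to scanning the raw bytes for manifest markers.
    with open(path, "rb") as f:
        data = f.read().lower()
    return any(marker in data for marker in ASSUMED_MARKERS)

if __name__ == "__main__":
    import sys
    for p in sys.argv[1:]:
        verdict = "provenance hint found" if has_provenance_hint(p) else "no label detected"
        print(f"{p}: {verdict}")
```

Marker scanning of this kind yields only a hint, and it misses watermark-based labels entirely, which is one reason the accord emphasizes shared detection tooling and cross-industry collaboration rather than any single technique.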
See also: Malicious Intent: Microsoft and OpenAI Identify APT Group Weaponizing GenAI LLM
The agreement was announced at the Munich Security Conference, an annual gathering of heads of state, diplomats, intelligence officials, and military leaders. It is a voluntary initiative under which the technology companies will develop and deploy tools to detect and label AI-generated content and will evaluate their software for potential abuse.
Although the accord drew a large number of signatories, critics say it lacks enforceable measures. It reflects growing concern about the misuse of artificial intelligence, particularly deepfakes that imitate real politicians and celebrities, some of which may be the work of malicious nation-state actors around the world. How effective the agreement proves at curbing AI-driven disinformation will become clearer in the coming months.
Tell us what best practices your organization uses for AI applications on LinkedIn, X, or Facebook. We look forward to hearing from you!