(TNS) — Perhaps never before has a powerful technology posed such a huge regulatory challenge for the U.S. government. Ahead of the state's January primary, New Hampshire Democrats received a robocall that played an AI-generated deepfake audio recording of President Joe Biden advising them not to vote. It is not difficult to imagine political deepfakes that go further and incite Americans to violence. That scenario comes to mind in light of a new study from New York University that identifies the distribution of false, hateful or violent content on social media as the biggest digital risk to the 2024 election.
Both of us have helped formulate and enforce some of the most consequential social media decisions in modern history, including banning revenge porn on Reddit and banning Trump from Twitter. So we have seen firsthand how well it works for social media companies to rely entirely on self-regulation to moderate content.
The verdict: not well at all.
Lightly regulated social media is rife with harmful content and has already contributed to the attempted insurrection at the U.S. Capitol on January 6, 2021, and the attempted coup in Brazil on January 8, 2023. The industry, the Supreme Court and Congress have failed to address these issues head-on, while mercurial CEOs make drastic changes to their companies. Widespread access to new and increasingly sophisticated tools for creating realistic deepfakes, such as the AI-generated fake pornography of Taylor Swift, will make disinformation even easier to spread.
The current state of social media companies in the United States resembles an unregulated airline industry. Imagine if no one tracked flight times and delays, or recorded crashes and investigated why they happened. Imagine if rogue pilots and passengers were never identified and never barred from future flights. Airlines would have less understanding of where the problems lie and what needs to be done, and they would face less accountability. The social media industry's lack of standards and metrics for tracking safety and harm is forcing us into a race to the bottom.
An agency should be created to regulate American technology companies along the lines of the National Transportation Safety Board and the Federal Aviation Administration. Congress could establish an independent authority responsible for setting and enforcing basic safety and privacy rules for social media companies. To ensure compliance, the agency must have access to relevant company information and documents and the power to hold noncompliant companies accountable. And when things go wrong, it should have the power to investigate what happened, just as the National Transportation Safety Board can after Boeing's recent accidents.
Curbing the harms of social media is a difficult task, but we have to start somewhere. And attempting to ban an already highly influential platform, as some U.S. lawmakers are trying to do with TikTok, would merely be a never-ending game of whack-a-mole.
Platforms could track the number of accounts removed, the number of posts taken down, and the reasons those actions were taken. It should also be feasible to build an industrywide database of hidden but traceable device IDs and IP addresses of phones used to violate privacy, safety and other rules, with links to the posts and activities on which each decision was based, cataloging both people and devices.
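For illustration only, here is a minimal sketch, a hypothetical rendering rather than any platform's actual schema, of what one record in such a database might contain, assuming identifiers are stored as hashes so they remain hidden but traceable:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from hashlib import sha256

    def fingerprint(raw: str) -> str:
        """Hash a raw device ID or IP address so it stays hidden
        but can still be matched across reports and platforms."""
        return sha256(raw.encode()).hexdigest()

    @dataclass
    class ModerationRecord:
        """One hypothetical entry in an industrywide moderation database."""
        platform: str              # e.g. "ExampleSocial" (placeholder name)
        action: str                # e.g. "account_removed", "post_removed"
        reason: str                # policy violated, e.g. "harassment"
        evidence_links: list[str]  # posts and activities behind the decision
        device_id_hash: str        # hidden but traceable device identifier
        ip_hash: str               # hidden but traceable network identifier
        timestamp: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc))

    # Example: logging a removal while exposing only hashed identifiers.
    record = ModerationRecord(
        platform="ExampleSocial",
        action="account_removed",
        reason="harassment",
        evidence_links=["https://example.com/posts/123"],
        device_id_hash=fingerprint("raw-device-id"),
        ip_hash=fingerprint("203.0.113.7"),
    )

Storing only hashed identifiers in this way would let a regulator spot repeat offenders across companies without ever handling the raw device IDs or IP addresses themselves.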
Companies should also share details of how algorithms are used to moderate content, along with the safeguards in place to avoid bias; research has shown, for example, that automated hate speech detection can exhibit racial bias and potentially amplify race-based harm. At a minimum, companies would be barred from accepting payments from terrorist groups seeking verified social media accounts, as the Tech Transparency Project found X (formerly Twitter) had done.
People tend to forget how much content is already being removed from social media, from child pornography bans to spam filters to the suspension of individual accounts, such as the one that tracked Elon Musk's private jet. Regulating these private companies to prevent harassment, harmful data sharing and misinformation is a necessary and natural extension of improving user safety, privacy and experience.
Protecting user privacy and safety requires research and insight into how social media companies function, how their current policies were created, and how content moderation decisions have historically been made and enforced. Trust and safety team members perform the essential work of content moderation and hold crucial institutional knowledge, yet companies such as Amazon, Twitter and Google have recently scaled back these teams. The layoffs have left the job market full of people who feel uncertain about careers in the private technology sector but are equipped with the skills and knowledge to tackle these issues. A new agency could hire them to craft practical and effective solutions.
Technology regulation is a rare issue with bipartisan support, and there is precedent: in 2018, Congress created the Cybersecurity and Infrastructure Security Agency to protect the government's cybersecurity. A new regulatory body can and should be created to counter the threats posed by legacy and emerging technologies alike, from domestic and foreign companies. If we don't, we will simply be left with one social media disaster after another.
© 2024 Los Angeles Times. Distributed by Tribune Content Agency, LLC.