“Misinformation” and “disinformation” are often conflated. They are not the same, but they are closely related.
Suppose you hear that Christmas falls on December 23rd this year. If someone shares that believing it to be true, that's misinformation.
If it's spread with the intent to deceive, however, it becomes disinformation, which can easily be amplified by people in the first group without their realizing it's false.
Artificial intelligence-generated audio and video will be ubiquitous this election season. So before you click "share," know that the technology used to create persuasive but often false content is advancing faster than you might think.
Marketplace's Lily Jamali spoke to Joan Donovan, a longtime researcher of misinformation and now a journalism professor at Boston University, to find out more.
Below is an edited transcript of their conversation.
Joan Donovan: With the technology itself, what we're seeing is that any politician can be realistically portrayed saying or doing something that never happened. And then there's the New Hampshire robocall example, where it was already illegal to misrepresent someone's voice, particularly a politician's, in a robocall. But the regulations now have to be updated to cover AI-based impersonation.
Lily Jamali: And has this content become so much more prolific simply because the technology has advanced so much?
Donovan: I don't know. One of the things my research is starting to look into with deepfakes is whether a small number of actors is creating a large amount of deepfake video and audio. It has certainly become easier for people to sign up with the different companies offering AI impersonation as a service; there are currently more than a dozen of them. And it's becoming increasingly difficult to tell the difference. As we begin to research and understand the impact this has on politics, I think we need to recognize that impersonating a political candidate is illegal and can have dire consequences, even if it's meant as a prank.
Jamali: We've been watching a wave of layoffs in the technology sector that continues as we speak, and many content moderators at the major platforms have been among those losing their jobs. At the same time, some of these companies have pledged to do more to crack down on election misinformation. What do you make of their efforts so far? Can you point to anything specific? We hear a lot of promises, but it's not clear we're seeing results.
Donovan: We've been waiting 10 years for AI moderation, and it remains to be seen what these technology companies will do next. There have been several pledges to remove misleading AI content about candidates, and they've promised to label AI-generated content. But what we really need is a commission to provide oversight and to penalize people when they manage to spread disinformation into the mainstream. I often think of this as a true-cost issue: How much does it cost the journalism industry to clean up after a major disinformation incident? How much effort has to go into those investigations just to put some truth back in the hands of the public? So social media companies really need to work harder to ensure that timely, accurate, local information is part of the flow of information people engage with. Otherwise, disinformation tends to flourish in that vacuum.
Jamali: Are there any signs that AI technology is reaching a point where it can detect and address misinformation and disinformation?
Donovan: No. Every once in a while I get an email pitching some disinformation laser to point at this problem, and I just don't think that's going to work. So one of the things we have to consider is that human intelligence is very important here. And it's not as if there's a surge of customers looking for mediocre essays and increasingly fantastical images. I think it's a neat trick, but at the same time it takes real sacrifice to bring true information to the surface, and truth is actually a human process. Truth has always been a human process. We may invent devices like thermometers that tell us when water is boiling, but a large language model trained on 10 years of Reddit data and the Wikipedia corpus has no parameters for judging truth. It would be interesting if these companies built expert systems that were actually good at providing facts, but that's not what's happening here. They're in a race for generalized artificial intelligence, and what they're up against is the fact that human speech can be very confusing. So I think it's going to take a long time for us to take seriously the fact that these systems have nothing to do with the truth.
Jamali: But there's no doubt that people are using them. The data shared by tech companies, which are trying to draw as much attention as possible to their AI chatbots, suggests that many of us, tens of millions of people, are using them. And anecdotally, that seems to hold up, right?
Donovan: I'm not so sure, because these are new products on the market, and we don't know whether people are using them only because they're trying to figure out what the tools are good for. The data we have is about the failures, right? What's gone wrong and what's broken with these technologies. And there's a lot of hype, even hype around the idea that the future could be a technological dystopia in which AI learns to control and run itself and humans are wiped out. That kind of fear, the sense that these technologies are incredibly powerful, is really a retread of anxieties that have built up over years and years of people going online, and it colors my thinking too. So a year from now, I don't know whether people will still be using these products or whether the companies will have come up with a business model that makes sense. It reminds me of the early days of social media, when companies were building tools in search of consumers. Right now it's hard to say who those consumers are and what they'll use AI for. But when it comes to elections and disinformation, we know that if you can get AI to repeat certain theories and ideas, humans don't have to come up with nearly as much of the disinformation themselves. That could have a negative impact on elections.
Learn more
There's a lot of time between now and Election Day on November 5th, which means plenty of time for more misleading information, whether accidental or intentional, to circulate.
That's why we're launching a series called “Decoding Democracy,” exploring election disinformation, the technological advances that have made it more persuasive, and tips for navigating it all.
Our first episode will be released on our YouTube channel on Tuesday, which is Super Tuesday of the primary season.