Misinformation can harm people's health when they find and act on content designed to deceive and mislead them online. To make matters worse, AI-generated misinformation is becoming increasingly widespread and difficult to detect, compounding the negative effects of human-generated misinformation.
Racial groups that already face vulnerable health conditions, such as Black and Hispanic communities, are the most likely to be harmed, especially when AI-generated misinformation is crafted to target and mislead them.
That much is emerging from a growing body of research. What's less clear is what can be done about it.
Wenbo Li, assistant professor of science communication at the Stony Brook School of Communication and Journalism, aims to find out, thanks to a seed grant from the university's Office of the Vice President for Research.
“Vulnerable and minority populations too often face disparate and negative health outcomes, and artificial intelligence and misinformation are exacerbating the situation,” said Laura Lindenfeld, dean of the SoCJ and executive director of the Alda Center for Communicating Science. “Research like Wenbo's helps us better understand how these individuals and groups understand and interact with online health and science information, and helps us develop strategies and tools to reach and support them. As science communication researchers, we empower people to make choices that support their wants, needs, and goals. We must play an active role in this.”
Li's research consists of two phases.
First, he will conduct surveys to better understand respondents' reactions to AI-generated misinformation and the social and personal influences that shape public understanding of scientific and medical issues.
Second, he plans to devise and test several interventions to help people reflect on the impact of misinformation in different ways. Through this second phase, Li said he hopes to identify effective strategies for helping people recognize potential misinformation and learn how to seek out accurate information on their own.
“AI-generated misinformation is here to stay, and we already know that it can negatively impact the health of vulnerable populations,” Li said. “This seed grant will help me understand how to create and share effective tools to combat misinformation, promote more effective and inclusive science and health communication and advocacy, and prepare for further research.”
The new study builds on Li's growing body of research examining the social impact of artificial intelligence and social and online media.