
In many ways, the world of misinformation research is robust. Over the past decade the number of studies has grown, evaluating things like the prevalence of misinformation in health and political content and how well people discern whether information is true.
But misinformation – especially on social media – remains difficult to study in many ways, limiting our abilities to fully understand how it impacts our decisions, behaviors and the world around us.
Several factors impede researchers’ ability to understand misinformation. Chief among them are difficulty accessing data and a lack of transparency about how social media platforms’ algorithms function. Others include disagreements over what to count as misinformation and the complexity of studying how human behavior changes after exposure to it.
“The biggest problem is the lack of social media data,” said Matthew Facciani, one of NCC’s fellowship trainers and a social scientist whose research focuses on media literacy, misinformation, social networks, political polarization, identities, and artificial intelligence.
“A lot of social media companies don’t share data with researchers or when they do it’s limited,” he said. “It makes it really difficult to even study. Lack of data, lack of transparency with social media companies. We don’t really understand how these algorithms work or what people are doing. It’s hard to even get a grasp on what people are sharing, regardless of whether it’s true or false.”
A 2024 Science article digging into how scientists grapple with studying misinformation noted that even when academic researchers have collaborated with social media companies, the quality of studies can come into question.
The article detailed a case where a group of scientists worked with Meta on a set of studies during the 2020 U.S. presidential election. In one, scientists manipulated the feeds of 20,000 Facebook and Instagram users to reduce the misinformation they saw and determine whether users who saw the altered feeds were less polarized on issues than those who continued to view content based on Meta’s standard algorithm. But other scientists pointed out that the study’s integrity may have been compromised because Meta also tweaked its default algorithm during the experiment.
Zicheng Cheng, an assistant professor at the University of Arizona who has conducted several recent studies related to information on TikTok, has used a database of news videos the company provides to researchers. But she said the pace of academic research makes it difficult to capture the full scope of misinformation because platforms remove videos so quickly.
That speed makes it hard for researchers to follow the game of whack-a-mole platforms play with accounts that spread misinformation.
“One challenge is that TikTok is quick at taking those misinformation videos down,” Cheng said. “I collected my database around 2023. Now when we look back to the previous data, many of the accounts have already disappeared, a big portion banned by the platform. If you try to track those accounts, they’ll have a new account. How can you know how many people misinformation videos reach before the platform takes them down?”
It’s also difficult to fully understand the role misinformation plays in human behavior. What does seeing misinformation lead a person to do and how could that decision or action be harmful?
“People are messy to study and they have a lot of noise,” said Facciani. “There is a lot of error in the data just because people are complex. When you’re studying something complex like people, you have to be careful to ensure what you’re studying is an effect and not just an anomaly. You need to not only be careful in designing the study, but also replicating it to make sure it wasn’t just by chance.”
And replicating studies is not something researchers are typically incentivized or rewarded to do, he said.
Much misinformation research doesn’t even touch on how beliefs formed from misinformation translate into action. A 2023 review from the Harvard Kennedy School of more than 750 misinformation studies found that fewer than 1% examined how participants later behaved, not just how their attitudes or beliefs changed.
Many of Cheng’s studies also require people to opt in. For example, college students sign up to participate, which may mean their media literacy skills differ from the broader population’s.
It’s also common in the misinformation field for scholars to do experimental studies, where they use mock social media posts and ask people if they can discern which are factual and which aren’t, Cheng said.
“But it’s lacking the real world implication,” she said.