From honest mistakes to outright lies: The Wakefield Scale of Misinformation

What do you picture when you hear the word “misinformation”? An influencer hawking “healing” crystals? Oil companies diluting the public’s understanding of climate change? Maybe you picture your friend who is convinced sunscreen causes cancer.

In popular consciousness, there is a tendency to focus on deliberate examples of misinformation, where someone intentionally says something untrue. There is also academic interest in misinformation rooted in cognitive biases, echo chambers, and identity. It has become clear to me as a science communicator and content creator that these two mechanisms don’t explain every instance of misinformation.

Most creators don’t want audiences walking away from content with the wrong information. I know that I’m not lying, so when something breaks down in the transfer of information, I need a way to pinpoint why. I wanted to create a way of measuring what the communicator is trying to do when information goes wrong.

Inspired by Tom Ridgewell’s Somerton Scale of Plagiarism, I created the Wakefield Scale of Misinformation. The scale, named for Andrew Wakefield, the discredited former doctor whose retracted research launched the modern anti-vax movement, takes several factors into account:

  • Is the communicator trying to deceive on purpose?
  • Is the communicator taking steps to verify the information they are sharing?
  • How willing is the communicator to predict and correct for audience misunderstandings?

The Wakefield scale is meant to fill a gap in how we analyze misinformation. It focuses on the intent of the communicator, not the severity of a claim or how “wrong” it is. Someone who lies about their age on a dating profile would rank higher than someone who claims COVID is caused by 5G towers and genuinely believes it. Knowing the impact of misinformation doesn’t always tell us how it happened or how to fix it next time. Impact and intent both matter, but only intent gives creators something we can actually adjust.

The scale is also meant to broaden how we think about misinformation. At first glance, some of the entries on this list look more like miscommunication than misinformation. But when we focus solely on whether a piece of information is correct, we lose sight of the larger picture: we want audiences to come away from our work with accurate information. If we take steps to mitigate misunderstandings, we can prevent misinformation in the future.

The barrier to entry for creating media these days is extremely low. Anyone can casually whip out their phone and make a TikTok. Anyone can write a Bluesky thread that goes viral. And everyone can make mistakes. My hope in making this scale is that, as communicators, we can approach these mistakes with kindness and be more open to recognizing and correcting them in our own work.

Wakefield Scale of Misinformation

0 – No Misinformation

Rarely does a piece of communication perfectly impart its full intended meaning to its audience. Therefore, a zero counts for anything that is “good enough.” The communicator has no intention to mislead and comprehensively gets their message across.

Example: Wikipedia pages that are comprehensive, thoroughly fact-checked, and actively community-edited

1 – Simplification for Understanding

When explaining complex topics to a lay audience, it is often necessary to simplify information so that audiences can follow the core idea. This may mean stripping away certain details or relying on metaphor. The communicator is not trying to deceive. They are merely making a topic more digestible.

Examples: Scientific models; Historical timelines

2 – Assuming Prior Knowledge

It is easy to assume that your audience is bringing appropriate context to your work. However, that is rarely the case. In this category, the mismatch between the communicator and the audience changes how the audience interprets the message, and can result in unintended takeaways.

Example: Using insider jargon that has a different meaning when used colloquially (“theory” in a scientific context)

3 – Assuming Full Participation

In today’s attention economy, it is rare for an audience to fully engage with an entire work. They’ll internalize a headline without reading the article. They’ll listen to a podcast while washing the dishes and miss important points. They’ll get bored and leave a 30-minute video after 30 seconds. When a communicator assumes an audience is taking in every part of the message, they may inadvertently hide necessary information.

Examples: Clickbait headlines; Putting important clarifications in footnotes

4 – Misspeaking

Sometimes, people have the right information in mind, but simply say the wrong thing by accident. A slip of the tongue will happen occasionally, especially in a live setting like a podcast interview or Twitch stream. For pre-recorded or edited work, ideally these instances are caught and corrected. 

Example: Momentarily referring to the wrong historical figure during a live interview 

5 – Misremembering

Memory is fallible. When you rely too heavily on memory, details fade, facts blend, and context disappears. Even if a communicator once knew the correct information, the version they repeat now may be distorted. Fact-checking matters, even for things we think we “know.”

Example: Sharing findings from a study, but forgetting – and therefore omitting – the caveats that made them accurate

6 – Overconfidence

Sometimes, people are just wrong. They come to an incorrect conclusion or repeat incorrect statements without fact-checking. In this category, the communicator has no intention to deceive, but there is an element of carelessness: they confidently share information without verifying it.

Example: Repeating common myths/misconceptions (“Carrots help you see in the dark”)

7 – Reckless Endangerment

Sometimes, people are confidently wrong, even when there is overwhelming evidence to the contrary. Our biases affect how we take in information and, therefore, what information we share. Confirmation bias will lead someone to ignore Google results that don’t align with their worldview. Desirability bias – also called wishful thinking – will lead someone to overvalue a flimsy study if it conveniently supports an outcome they want to be true. As Matthew Facciani writes, identity bias will lead someone to trust claims that reinforce their group identity and reject those that threaten it. In this tier of the scale, a communicator is not intentionally spreading misinformation, but they’re failing to interrogate their own assumptions.

Example: Sharing ineffective alternative health remedies, while genuinely believing they will help. 

8 – Uninterested in Truth

Every news outlet, brand, and independent creator is competing in the attention economy. As a result, many actors in this space care only about whether their work will get clicks or watch time, regardless of the veracity of the content itself. It’s not so much that they intend to deceive as that they intend to capture attention, and deceiving or sensationalizing is one method of doing so.

Examples: AI “slop” content that goes un-fact-checked; AI-generated images presented as reality; Intentionally misleading YouTube thumbnails

9 – Deliberate Misinformation

Some people lie for their own benefit. In this category, communicators intentionally say something they know to be untrue to dodge responsibility or serve their own interest at their audience’s expense.

Examples: Crypto schemes advertised as a good investment, when the plan had always been to rug pull; Influencer marketing that does not disclose sponsored content

10 – Deliberate Disinformation

When industries, countries, political movements, or other major institutions work together to influence the public’s understanding of a topic, it erodes trust in shared reality. The intention is not just to deceive, but to make audiences question the validity of trustworthy sources and control the media landscape.

Examples: Oil companies sowing doubt about the climate crisis; Government restrictions on independent reporting

Creators of all kinds are liable to make mistakes. This scale can be a tool to help us notice those mistakes sooner and correct them with generosity. For a detailed walkthrough of the Wakefield Scale, my video essay A Reasonably Thorough Field Guide to Misinformation offers more context, deeper examples, and a healthy amount of YouTube-grade silliness.
