How weak? The art of weakening attacks

An orange pipette hovers above a petri dish containing letters spelling out 'fake news'.

When looking to respond to a misinformation attack, a key challenge is how to talk about the false claims without spreading them.

If the claim is that "X causes Y", it might seem obvious to say something direct, such as "X does not cause Y".

But simple phrasing like this carries risks: it brings the claim to the attention of people who have never heard of it (and who may then start googling); despite the "does not", it still links the two things in people's minds; and it relies on people trusting your opinion.

Approaches such as inoculation science and prebunking (see Learning the language) treat the misinformation like a virus and a counter-misinformation intervention like a vaccine.

Just as a vaccine made from a virus uses a weakened or killed form of that virus to reduce the risk of infection, so the misinformation virus (the fake news) in any intervention needs to be weakened to reduce the risk of it reinfecting others. Prebunking, for example, exposes people to weakened doses of the attack.

It's a neat idea, but when you're faced with responding to a piece of fake news what does 'weaken' really mean? How can you make an attack weaker? How weak is weak enough?

As suggested by my title, I think this is more of an art than a science and calls for the skills and judgement of professional communicators.

My approach is to think of weakening as both diluting and breaking down the misinformation. So, for example, if the claim is "the MMR vaccine causes autism", I would start by saying "the false claim that the MMR vaccine causes autism" as a way of forewarning, within the sentence itself, that the claim is not to be trusted.

I'd then dilute by being less specific about the vaccine: "the false claim that measles vaccines cause autism". Then I'd break down the specific claim further by changing to "the false claim that measles vaccines are linked to autism".

With a claim such as "using birth control like the pill or IUDs makes it harder for most women to get pregnant after they stop using them" (from this poll), it's probably a good idea to break the link between birth control and "harder", and also to dilute the reference to the specific method.

So the weakened form might be: "False claims that some contraceptives, such as the pill and IUDs, affect the ability of women to get pregnant after they stop taking them aren't supported by the evidence."

This could then open the way to sharing a strong counter-argument: "A review of 22 scientific studies showed over 83% of women who stopped using contraception were able to get pregnant within a year."

To give another example, take the claim: "Climate scientists are in the pocket of elites and only produce studies favourable to them as a result" (from this report). Here "in the pocket of" immediately jumps out as charged language to avoid. You could replace this with the more neutral "are biased", but even if you start with "false claim", it still introduces the idea of bias.

In this case I would dilute the misinformation further, while still trying to address the main allegation, so changing to something like: "false claims that research by climate scientists is not independent of funders and international organisations".

At each stage it's worth questioning: is this weak enough? Is this too weak?

I'd argue that, while weakened, the claim must still be recognisable, so that if someone who has read the weakened version is later exposed to the full claim, they will recall the counter-arguments and evidence shared in the intervention.

Remember, this weakened dose of the attack is only one part of the intervention: the part that forewarns people about a fake claim they may encounter. The intervention will also need to include evidence, strong counter-arguments, and credible, trusted sources to persuade people the news is fake.

So once you have your weakened attack how should you use it?

One model is the truth sandwich: at the beginning you have facts and expert sources; in the middle you share the attack in a weakened form; and at the end of the sandwich you finish with facts and credible alternative explanations.

I used this model for a recent story about research showing very gradual waning of protection (0.04% each year) from the MMR vaccine. The concern was that antivax misinformers would jump on it as an excuse to resurrect previous fake news about MMR being ineffective or unsafe.

My approach was to work with the researchers on a quote for the press release that addressed the concerns behind these claims while alluding to them only in a very diluted form, as I felt the myths around MMR were so well known and recognisable that people would easily make the connection. I then sandwiched this between persuasive facts and figures and highlighted the scientific consensus:

“Before routine measles vaccination began, England and Wales saw large outbreaks with peaks of 600,000 cases in some years. By contrast, from 2010 to 2024, the most reported cases in any one year was under 3,000.
“It’s important to emphasise that the patterns we see in the data are only there because outbreaks have occurred as a result of declines in vaccine coverage.
“If there were no outbreaks, this small amount of waning would not show up in any data.
“The key issue here is coverage, not the effectiveness of the vaccine.
“The scientific consensus is that measles vaccines are safe and have been highly effective in preventing outbreaks.”

Interestingly, while this was the quote in our release that was least specific about the study results, it was the one PA included in their story (see The Independent).

Fortunately, the misinformation attacks we were expecting did not materialise. While I can't claim our truth sandwich quote with the weakened attack was responsible, it certainly meant we were ready to use this intervention, either in a quote or broken down into key messages for social media, should attacks occur.

There are circumstances, as in media interviews where spokespeople are asked directly to respond to specific misinformation, where weakening the attack may not be an option or could cause confusion.

But, as a general principle for counter-misinformation work, I think it makes a lot of sense, wherever possible, to refer to fake news only in a weakened form, to reduce the risk of spreading the very misinformation you are trying to contain.