Misinfo-proof research + News/Analysis

The Spectrum of Misinformation returns with misinformation news, analysis & practical advice for communicators.
In News digest, read about anti-medicine conspiracies, climate disinformation threatening business, criticisms of the UK Online Safety Act, Danish moves to outlaw deepfakes, AI fact-checking on X, the impacts of 'weather weapon' conspiracies, fake news networks in Asia, calls for Australia to tackle media/digital literacy, and the BBC using GenAI to adapt articles. Then learn to avoid The language trap and discover top tips on Misinformation-proofing research, before wrapping up with More of The Spectrum.
News digest
BBC investigates the disturbing story of 23-year-old Paloma Shemirani, who died in 2024 after refusing chemotherapy; her brothers blame her death on anti-medicine conspiracy theories.
Full Fact responds to a UK Science, Innovation and Technology Committee report which says the Online Safety Act doesn't go far enough and wouldn't have prevented the spread of misinformation that led to the Southport riots. Also see The Guardian and Telegraph.
Forbes argues businesses need to take action on climate disinformation as it threatens societies and economies.
The Guardian covers an IPIE review suggesting misinformation spread by fossil fuel firms risks turning the climate crisis into a catastrophe.
Independent reports the Danish government is considering making it illegal to spread deepfakes. The proposals are said to 'allow for' parodies and satire, but it's unclear how this would work.
Al Jazeera looks at how X users are turning to the platform's resident AI, Grok, to fact-check posts, only to find it often provides inaccurate or fabricated answers. Alex Mahadevan from the Poynter Institute said:
X is keeping people locked into a misinformation echo chamber, in which they’re asking a tool known for hallucinating, that has promoted racist conspiracy theories, to fact-check for them
The Guardian reports criticism of X's plans to use AI to draft fact-checking notes on the platform, with critics saying the system is open to abuse.
WIRED investigates how conspiracy theories about a 'weather weapon' causing the recent Texas floods have led to death threats against cloud-seeding firms and attacks on weather radar systems.
Charlie Brinkhurst-Cuff writes about being diagnosed with polycystic ovary syndrome (PCOS) and the myths and misinformation surrounding the condition.
A network of Facebook and YouTube channels directing users to a bogus news website is profiting from disinformation about clashes between the Philippines and China in the South China Sea, France 24 reports.
ABC explores a report from the University of Canberra suggesting Australians are the most worried globally about misinformation and that the country 'urgently needs' a media and digital literacy campaign.
BBC has announced a pilot using GenAI to create news summaries and adapt local stories to its house style.
A new Ofcom discussion paper on deepfakes says attribution measures (eg watermarks), while useful, can be removed or manipulated and need to be combined with other interventions.
The language trap
Source checkers NewsGuard recently announced they're retiring the terms 'misinformation' and 'disinformation' in favour of more specific labels.
While I disagree with their argument that one person's misinformation is another's information (cues such as negative emotional language, presenting opinion as fact, scapegoating, impersonation, and conspiracy thinking can be used to spot misinformation with 80% accuracy), I can understand the move.
They cite how terms such as 'disinformation' have been politicised and misused by misinformers: 'It’s no longer enough to call something fake — because those on all sides of the political divide use the term so avidly and casually'. Instead they will turn to language that's:
harder to hijack, and more specific. We will describe what a piece of content actually does, such as whether it fabricates facts, distorts real events, or impersonates legitimate sources. We’ll explain whether a claim is explicitly false, AI-generated, unsubstantiated, or manipulated.
They also argue that 'A simple phrase like “false claim” is more powerful and precise than “misinformation” and “disinformation”'.
I think misinformation and disinformation remain useful umbrella terms for talking about the broad topic and range of attacks. But I agree that, when explaining specific attacks to public audiences, this vague language no longer works: calling claims 'false', 'unproven' or 'misleading', or calling out fabrication, manipulation and impersonation for what they are, is much more effective.
Misinformation-proofing research
Last year I was asked: can we make research misinformation-proof?
It's a question I've been thinking about ever since, and a few weeks ago I gave my first work-in-progress (WIP) advice about it in a training session.
While it's impossible to totally protect your research from misinformers determined to misrepresent, distort or misuse it, you can make it more resilient to misinformation attacks.
Prepping for this session, I realised how much following best practice in communications and media work hardens research against attacks.
Here are some of my tips:
Explain, explain, explain: a common mistake is to think that 'the evidence speaks for itself', but researchers often wildly overestimate how much audiences know about (or are motivated to understand) a topic.
Starting with the assumption that whoever is reading/listening is intelligent but has little prior knowledge of your research area is good comms practice as well as protective against misinformation attacks.
I'd also stress the need to build a strong narrative. This is something misinformers are often very good at. As you explain what you investigated and found, focus on 'storytelling' moments - why it matters, how it relates to real-world problems or solutions, and the human stories involved - and give helpful context alongside your findings. Telling the full story of your research (limitations/problems and all!) leaves less room for dangerous counter-narratives.
Factual, neutral tone: when dealing with emotive issues it's tempting to slip into emotive language. But this comes with risks, including that such language (often a hallmark of misinformation) will be taken for bias or politically motivated advocacy. I always recommend keeping the tone of explanatory copy factual and neutral, with emotive language used only in attributed quotes or comments.
Conflicts of interest (COI) [AKA 'competing interests']: since the pandemic, scrutiny of conflicts of interest in research has intensified, with journalists and campaigners looking to seize on any undeclared potential COI as evidence of bias or misconduct. That's why I talk about 'maximal transparency' when it comes to COIs. Don't think something is a COI but suspect a bad-faith actor might say it is? Declare it now to protect against attacks later.
There are other lessons to be learned from asking: when a misinformation attack occurs, what will you wish you'd done earlier?
One of these is explaining just as clearly what your findings don't show (or shouldn't be used as evidence of) as what they do. I call this a "What we're not saying is..." comment. It's good practice to weave this sort of line into media stories and interviews where you know there's an unhelpful media narrative that your findings could be accidentally or deliberately misinterpreted to fit.
There are plenty of other things that you can do to help, such as date-stamping your findings, including quotes from advocates your audience trusts, and linking to credible sources.
As I highlighted in the session, the move towards an open access publishing model, while it has many advantages for science, has left research more vulnerable to misinformation attacks.
Research papers are no longer in a (pay)walled garden where they're only read by other scientists. We're now seeing misinformers trawl open access papers to cherry-pick findings (even just graphs!) and present them, often missing vital context or explanation, as supporting their false or misleading narratives.
It's a huge problem: the audiences misinformers share this content with are ill-equipped or unmotivated to interrogate the papers themselves, and so take the misinformation about real evidence at face value.
In my view, the rise of open access + misinformers + social media turns every research paper into a communication to a much broader audience, and authors and publishers need to adapt to this new reality.
There's no magic bullet but ensuring all communications about your research are as accessible, relatable, and useful for a wider (non-scientific) audience as possible will make it more resilient.
Ultimately, what if you explained your research in a more public/media-friendly way but it was never attacked?
I'd argue that improving the way you communicate about publicly-funded research is in itself a public good.
More (or less?) of The Spectrum
Missed my eventful last newsletter? You can catch up here.
Avid readers may have noticed that editions of The Spectrum have become less frequent. I'm now aiming to produce the full newsletter monthly (instead of fortnightly), with occasional 'one-shot' special editions.
This is to ensure that, with my other commitments, I have enough time to research, think and keep the quality high - I really value your time and want to ensure it's always worth reading!