Network lifts off + News/Analysis

A black and white rocket lifts off against a blue/black background. A grey mail icon with '16' is visible in the rocket's circular window.

The Spectrum of Misinformation returns for 2026 with more news, analysis & practical advice for communicators.

In News digest: AI overviews "put lives at risk", decline porn racism, White House made me cry, false sunbed claims, testosterone jabs, Canadian AI harms, fake Venezuelan crowds, anti-Ulez extremism, Japanese deepfakes. Plus: Finns teach AI literacy, debunking a food delivery hoax, and immunising LLMs. Then discover What to expect in 2026, watch from mission control as our Network lifts off and tumble down the conspiracy rabbit hole in Book Club: I want to believe before catching up with What did I miss in 2025?

News digest

The Guardian reports mental health charity Mind is launching an inquiry into AI and health after an investigation revealed Google's AI overviews were providing dangerous medical advice. Mind's Sarah Hughes said AI overviews were giving vulnerable people:

“dangerously incorrect guidance on mental health... [including] advice that could prevent people from seeking treatment, reinforce stigma or discrimination and in the worst cases, put lives at risk.”

BBC investigates the rise of "decline porn": GenAI videos created by influencers that falsely portray London and other Western cities as overrun by crime and immigration, often using racist tropes. One influencer defends his videos, saying they are intended to be funny, but adds:

"If people saw it and they immediately knew it was fake, then they would just scroll. The selling point of generative AI models is that they look real..."

CBC and other outlets report that the White House posted a misleading image of the arrest of a civil rights attorney, altered to suggest she was in tears.

BBC finds hundreds of adverts on TikTok, Instagram and Facebook making dangerous false claims about the health benefits of sunbeds.

France 24 highlights how social media ad campaigns are influencing men to get testosterone jabs they don't need, increasing the risk of side effects including infertility and cardiovascular problems.

Canadians following health advice from AI were five times more likely to experience harms than those who did not, according to an online survey of 5,000 people by the Canadian Medical Association.

AP shows how Finland's national curriculum teaches children from age 3 how to spot fake news with newly added lessons on AI literacy.

CNBC reports AI-generated images purporting to show crowds in Venezuela celebrating after the US military removed President Maduro have been viewed millions of times on social media.

The Independent quotes the German health minister dismissing as "completely unfounded" RFK Jr's claims that German doctors were punished for not administering COVID-19 vaccinations.

The Guardian covers how a retired dishwasher engineer from Bexley was radicalised by conspiracy theories and disinformation in anti-Ulez Facebook groups to become a bomb-maker who blew up a camera.

NHK investigates AI deepfakes circulating around Japan's election including fake interviews and entire AI-generated news items.

Platformer debunks a GenAI-enabled hoax about food delivery firms rigging their apps against customers and delivery drivers.

In The Lancet, a comment piece argues that Large Language Models (LLMs) need to be 'immunised' against misinformation.

Reuters covers research showing LLMs are more likely to provide incorrect medical advice when exposed to misinformation that looks like it comes from an 'authoritative source', such as doctors' notes.

What to expect in 2026

Last time I reflected on lessons from misinformation in 2025, but what predictions can we make about the year ahead?

With upcoming general elections (eg Denmark, Brazil, New Zealand), presidential elections (eg Colombia, Bulgaria, Estonia), and UK local elections, expect GenAI deepfakes to once again rear their ugly heads.

Expect 'quick and dirty' deepfakes of candidates appearing to say offensive and outrageous things, as well as sophisticated, entirely AI-generated news reports that are hard to distinguish from the real thing.

Also look out for the 'liar's dividend', familiar from the US, with politicians dismissing as 'fake news' true reports that show them in a bad light.

Even as measles outbreaks continue across the US and UK, expect misinformation attacks on vaccines to intensify with likely flashpoints including US authorities reconsidering ALL vaccine recommendations and a report into the failures of the UK's Vaccine Damage Payment Scheme.

Common vaccine misinformation attack lines include: that children get 'too many' vaccines (when it's precisely because children are most vulnerable that we vaccinate the young), that vaccines are 'unnatural' (ignoring that they stimulate our bodies' natural defences), and that parental choice on vaccines is being taken away (spoiler alert: it isn't).

Attacks on vaccines, some of the safest and most effective health interventions we have, are part of a broader attack on the very idea of public health itself. Misinformation narratives suggest individuals should 'take back control' of their health from 'experts' and 'authorities'.

Why? Because misinformers want to claim this authority for themselves, the better to manipulate and profit from their audiences.

Unfortunately 2026 is likely to provide further evidence that the COVID-19 culture wars, around mask-wearing and vaccination for the 'common good', never went away. Counter-misinformation interventions need to get better at explaining how public health also benefits individuals.

And then there are the ongoing assaults on climate action.

The fossil fuel industry, and those with a vested interest in it, will continue to exploit libertarian and doomist misinformation narratives that net zero policies are too expensive, useless, and job-destroying.

These attacks are often deeply cynical or, as with the 15-minute city conspiracy theory, mired in some alternative reality. But they have been worryingly effective at influencing politicians and policy-makers.

I think climate researchers have to get better at producing and explaining evidence of the economic benefits of climate action to local communities and individuals, rather than appealing to some global moral mindset.

I predict that the most effective counter-misinformation interventions in 2026 will be those that put the personal and local above the universal and global, that work with local groups and trusted advocates to design and deliver interventions from the bottom up rather than the top down.

What 2025 showed is that the Age of Authority is over and that institutions that want to maintain public trust have to earn it. There's no room for complacency or just 'doing things the way we've always done'.

In 2026 we have to rethink the whole information ecosystem and our role in it. If we care about misinformation harms, how are we addressing people's common questions and concerns? How can we get accessible, relatable, high-quality information to the audiences that need it?

Nobody has all the answers, but I'm hopeful that this year we are at least starting to ask some of the right questions.

Network lifts off

This month at LSHTM we launched a new network to help members counter dangerous health misinformation in the UK.

I first floated the idea back in December 2024, realising no organisation can fight the many-headed beast of misinformation alone.

The network is an attempt to fill the 'communications gap' between academic research and the knowledge and skills needed to respond to health misinformation in the media, online and on social media.

Because the overall goal is not just to combat individual misinformation narratives that could have a negative impact on human health, but also to curate and create trustworthy 'healthy' information, we've called it the Health Information Integrity Network (HIIN).

Hosted by LSHTM, its members remain independent with their own unique voices. The plan is for it to be agile enough to respond to mutating misinformer tactics and prioritise practical support for 'first responders': academic experts and communications professionals on the frontline of the fight against harmful health misinformation attacks.

So what will it do?

The HIIN starts life as a mailing list. We're inviting a selection of those who submitted an expression of interest in the network (including universities, health bodies, charities, and media and tech firms) to join. Members will be encouraged to use it to share information about health misinformation attacks affecting the UK, along with calls to collaborate on counter-misinformation campaigns on specific health topics.

The first call for collaboration is for a campaign to counter misinformation about common childhood vaccines. We're particularly interested in working with those with links to UK regions with low vaccine uptake.

It's early days but, if we can attract the funding and resources, we have some ambitious ideas for how this work could grow: everything from health misinformation alerts, to training, briefings and reports, developing new types of intervention, and maybe even courses.

With the start of the year a busy one in my business-as-usual role at LSHTM, the last month has been a slog to get everything ready for launch [Reader - this is the main reason last month was Spectrum-less].

The launch coincides with what, I think it's safe to say, is a period of turmoil, even peril, for universities, health organisations, and the communications profession. A volatile media landscape, the decline of institutional social media, the rise of GenAI, the politicisation of health and science, all play into the hands of misinformers with an agenda.

In the face of daunting challenges like these the launch of a network like the HIIN seems like a small thing.

But as I've learned in the 18+ months of my misinformation journey so far, regularly wrestling with doomerism and impostor syndrome, taking each small step forward towards the next milestone is vital. It's the only way we can begin to understand the complex problem(s) of misinformation and what we can do to mitigate its harms.

Book Club: I want to believe

For those interested in why we believe things that aren't true, I can recommend Joe Pierre's new book False.

As a psychiatrist, Pierre comes at misinformation from a different perspective than many researchers, even if much of the ground he covers, and some of his examples, will be familiar to regular readers.

He makes a good case that the lazy assumption that conspiracy believers are 'deluded' distracts from the fact that we are all vulnerable to false beliefs.

While paranoia might drive some belief, he cites 'epistemic mistrust', the mistrust of authorities and officialdom of all kinds, as a major factor. Some of this mistrust may be rational and 'earned' through the misdeeds of governments or institutions, as he writes:

"Lack of transparency and orchestrated misdirection, even if well-intentioned for some greater good, is a surefire way to erode trust and open the door to belief in conspiracy theories and other types of misinformation." [p.100]

Rather than being arch-sceptics, conspiracy theorists are desperate to believe in something, echoing Fox Mulder's mantra: I want to believe.

But instead of believing what they hear from authorities, they respond to the misinformer call of "just asking questions" and "do your own research" and, guided by confirmation bias and motivated reasoning, assemble narratives (not really theories) into 'evidence' that justifies their mistrustful worldview:

"the irony of conspiracy theories is... believers are so blinded by epistemic mistrust that they fall victim to disinformation without seeing the real conspiracy hiding right under their noses." [p.104]

So how can we avoid going down the false belief rabbit hole?

Pierre recommends the 'holy trinity' of truth detection: intellectual humility, cognitive flexibility, and analytical thinking. We also need to hold CEOs and politicians accountable and call out their falsehoods. One idea I liked was 'turncoat debating' in which participants have to present both sides of an argument and then conclude with their own synthesis.

If you're new to misinformation there are better places to start (eg Foolproof or The Psychology of Misinformation) but False brings fresh insights to the question of why we believe the things we do.

It made me think: The truth is out there, but maybe it needs better PR.

What did I miss in 2025?

Looking to catch up? Check out all things last year in Review of 2025.