Integrity matters + News/analysis

Black outline of a lighthouse against an orange background with a mail icon labelled 17 in the light.

The Spectrum of Misinformation returns for 2026 with more news, analysis & practical advice for communicators.

In News digest: ADHD claims denied, humans not LLMs to blame for bombings, sayonara Sora, Indian fuel panic, Germany's deepfake porn scandal, Australia at risk from climate misinformation, Canadian conspiracies, ChatGPT health fails, a misleading State of the Union. Plus: meet the fake rapper selling Brits real hate, mental health misinformation, TikTok top for fake news in Europe. Then learn about Navigating information integrity, get a UK policy update before discovering if you've Missed much?

News digest

The Independent features 32 experts disputing Wes Streeting's claim that ADHD is overdiagnosed in the UK, with a study suggesting the condition may be underdiagnosed and that better support and treatment are needed.

The Guardian covers how LLMs were blamed for the US bombing of an Iranian school that killed almost 200 young girls, when human decisions were likely at fault, with those responsible dodging accountability:

"Calling it an “AI problem” gives those decisions, and those people, a place to hide."

NBC News reports OpenAI has announced it will shut down its Sora AI video generation app. The launch of Sora 2 last year alarmed many with its ability to generate realistic deepfake videos.

Euronews features UEA analysis suggesting over half of social media posts on mental health and neurodivergence contain misinformation.

Economic Times reports the Indian government has warned against a “deliberate, coordinated misinformation campaign” aimed at causing panic by falsely claiming shortages of petrol and LPG.

BBC highlights a celebrity deepfake scandal in Germany and plans to make the creation and distribution of pornographic deepfakes an explicit offence.

The Independent covers criticism of the FCC Chair's threat to take away the licences of broadcasters he says are promoting “fake news” about the Iran war, with the move called “flagrantly unconstitutional”.

The Bureau of Investigative Journalism examines 'Danny Bones', a fake AI-generated rapper created to spread white nationalist messages targeting British Muslims, paid for by Advance UK.

Liz Kendall has said the UK needs to move faster to combat deepfakes being used to target women and girls, The Guardian reports.

ABC News looks at an Australian senate report warning that the country's information ecosystem is threatened by climate misinformation attempting to derail climate action and polarise debate:

"Worryingly, the committee heard that Australia has some of the highest levels of concern about information integrity globally."

National Observer highlights a report from the Media Ecosystem Observatory which found 100 users were responsible for almost 70% of online conspiracy posts from influential accounts in Canada.

The Guardian covers research showing ChatGPT Health failed to recommend going to hospital in half of 60 realistic scenarios where independent doctors agreed a visit was medically necessary.

AP tracks the many false or misleading claims on immigration, finance, and war in Trump's State of the Union speech.

Research from Science Feedback suggests that in Europe TikTok carries the most misinformation of any major platform, and that accounts sharing misinformation generate significantly more engagement. Meanwhile, a team of UCL researchers have created a new AI tool to detect the risk of misinformation in diet, nutrition and health content.

Navigating information integrity

Like misinformation, 'information integrity' is one of those umbrella terms that requires some unpacking.

The UN defines information integrity as referring to 'the accuracy, consistency and reliability of information' (more in this briefing).

Initially focused on information that supported human rights and democratic freedoms, the term is now widely used to describe information ecosystems that support trustworthy information on a range of topics: e.g. climate information integrity, health information integrity.

Why is this important?

For one thing, it recognises that misinformation is only part of the problem: information integrity covers promoting systems and processes that deliver 'healthy information', as well as efforts to counter misinformation.

While we should call out the Generators of misinformation who profit (financially, politically) from spreading false or misleading claims, too often we blame the people who consume these claims (Believers and Neutrals) rather than looking hard at whether there are gaps in the trustworthy information available to them, often called 'information voids'.

Information integrity, then, is about filling these gaps as well as countering misinformation attacks and false narratives.

But what does this mean in practice?

From my experience with health information, the route to information integrity is long and winding, made up of many individual steps and decisions; it is not as simple as just telling people the facts.

Instead of starting with what you want to tell people, the journey begins with researching what audiences vulnerable to misinformation want to know. What information are they looking for? What questions are they asking? Where do they go to find information? What's missing from existing sources? Who might they trust to tell them the truth?

It's humbling to realise that 'perception is reality' when it comes to information on a topic. You might think your information is accurate, consistent, and reliable, but if your audience disagrees they will look elsewhere for answers that are more accessible, more relatable, and more relevant to their questions and concerns.

Anyone wrestling with information integrity is likely to suffer the communications equivalent of the long dark night of the soul.

You'll sweat over tools like AlsoAsked, which show the deceptively simple questions people are really asking: questions that are in fact very hard to answer in a concise, accessible and relatable way.

You'll ponder long and hard over how to build trust, considering things like maximal transparency about any perceived conflicts of interest. And if the Age of Authority is over and the Age of Authenticity is here (as Renée DiResta puts it: "Influencers are the opinion formers of the internet age" [1]), what does that mean for institutional communications?

One of my mantras is explain, explain, explain, because the harder you look the more you see that, in the case of health topics at least, most information sources don't explain things enough, leaving voids and unanswered questions that lead people to turn to less reliable sources.

It can be tough to fill these voids by, for instance, creating a Q&A article or simple messages for social media, when even reliable sources can't agree on things like what the first mRNA vaccine was, or say a vaccine has a 'good safety profile' without providing any data to back it up.

Just deciding when and where to intervene to curate, create, or defend information takes time and resources. The crisis communications model of Monitor > Analyse > Respond [repeat] can help; we drew on it in LSHTM's response to the UK meningitis B outbreak, looking for what crucial information wasn't out there and then seeing if we could provide it (sometimes not intervening, if you don't have the right expertise or answers, is the right move).

The trick is to know when to talk and when to listen, moving beyond broadcast-only comms towards an approach closer to dialogue: one that empathises with those who are confused, scared, or angry and takes to heart their genuine questions and concerns (while ignoring bad-faith arguments and those only seeking to polarise).

Even those we expect to protect information integrity can inadvertently damage it, and public trust with it. Recent examples include the MHRA and manufacturers being taken to task for leaflets suggesting behavioural side effects of drugs used to treat Parkinson's are 'uncommon', and DHSC having to retract a claim that sunbeds are 'as dangerous as smoking'.

Just as we are all vulnerable to misinformation, so we are all potentially at risk of undermining information integrity.

Ultimately, holistic approaches that combine warning people of misinformer tactics, pre- and de-bunking specific misinformation attacks, and curating, creating, and defending accurate, consistent, reliable information are our best bet for improving the information environment.

But, like counter-misinformation interventions, all of these efforts rely on people being motivated to engage with 'healthy' information, when we know misinformer messages are often simpler, more emotive, and more compelling. I'll return to the topic of motivation another time.

UK policy update

A few weeks ago with little fanfare the UK Government published its media literacy action plan A Safe, Informed Digital Nation.

Media literacy should feature in any diagram of the different interventions needed to protect people from misinformation, overlapping with information integrity and counter-misinformation work. As Finland shows, media literacy can harden societies against disinformation attacks.

So what's in this plan for 2026-29?

There are 9 mentions of 'misinformation' and plenty that's relevant for counter-misinformation work, including the ambition that:

"By 2029, more people can confidently recognise misinformation and know where to find trusted information."

Although how this will be evaluated is unclear. There are some interesting actions around boosting trusted information and news sources:

"We will explore options to ensure the accessibility and availability of trusted information sources online... This includes helping trusted voices reach more people when false or misleading information is spreading virally online..."

Alongside this are other actions including piloting a media literacy campaign in Yorkshire & the Midlands, a local media strategy, and exploring a requirement for the BBC to promote media literacy.

Different departments/ALBs will be responsible for countering misinformation in their own areas. In health DHSC and UKHSA will 'support the public to access clear, accurate information' with UKHSA continuing to focus on 'vaccine communications, as well as health protection priorities, incidents and outbreaks' while DHSC will:

"...build media literacy skills through its Men's Health Strategy helping men critically assess health information and protect themselves against harmful misinformation."

There's also a welcome focus on evidence gathering:

"As part of the Government Chief Scientific Adviser’s Year of Trustworthy Information, the Government Office for Science has commissioned an evidence review on misinformation, focusing on how to measure the problem, assess harms, and counter or mitigate these harms."

And, building on previous announcements, it details actions to strengthen media and digital literacy in the National Curriculum.

Yet welcome as these actions are, examining the plan more closely highlights what's missing as much as what's there.

Extremism, misogyny, health, and elections are mentioned, but there's nothing about tackling misinformation on climate and net zero, and no actions for bodies such as DESNZ, DEFRA or the Cabinet Office.

The plan talks a lot about local and place-based approaches, looking to 'strengthen local authorities’ capability to respond swiftly and confidently to harmful online information and digital threats' and 'Teaching media literacy in trusted local spaces, such as libraries and youth clubs'.

Yet beyond references to the £11.9m Digital Inclusion Innovation Fund (now closed) it has little to say about how such local initiatives might be funded, at a time when local authority budgets are under huge financial pressure and many libraries and youth clubs are at risk of closure.

There's a single mention of 'universities' alongside charities as partners (and a Durham University case study) but with no indication of what ongoing support is available to universities to scale up media literacy work or undertake further research in this area.

What it adds up to is a set of fine ideas but a lack of detail on how aims will be accomplished, or on what resources and political will exist to take this plan forward. With DSIT, DCMS, DfE, Ofcom, UKHSA, DHSC etc all being responsible, the question is: who is really accountable?

If media literacy and allied counter-misinformation and information integrity efforts are not to fall between the cracks, it seems to me that some kind of single national body, like the National Cyber Security Centre, is needed to ensure progress within individual departments isn't sacrificed to other priorities.

Funnily enough, exactly this kind of national body is what the House of Commons Foreign Affairs Committee has called for in its new report, Disinformation diplomacy, published on 27 March.

The report analyses disinformation threats from Russia, China, Iran and non-state actors and concludes the 'intensity of hostile activity orchestrated by Russia across the West is that of a state at war', with Committee Chair Emily Thornberry commenting that disinformation is "the new warfare and open liberal democracies are sitting ducks".

To counter this threat it recommends a series of actions including public awareness campaigns about foreign disinformation, defending media freedoms and creating a statutory, public-facing National Counter Disinformation Centre. Perhaps it's time the UK took the threats from both foreign and domestic misinformation more seriously.

Missed much?

Last time I told the inside story of the launch of LSHTM's new Health Information Integrity Network (HIIN). Still catching up? Check out my review of misinformation in 2025.

Notes

This edition is dedicated to the memory of Spectrum of Misinformation supporter Ruth Francis.

[1] Invisible Rulers, p.48