

Welcome to The Spectrum of Misinformation newsletter with misinformation news, analysis & practical advice for communicators.

In News digest, discover issues with China's AI chatbot, Trump's first-week false claims, Apple's AI news backtrack, TikTok's German influence, the impacts of Meta's factcheck moves in the Philippines & Myanmar, and new guidance for UK charities. There's a look at the year to come in Threat level: 2025 and lessons for fake news fighters in Notes on a counter-misinformation campaign.

News digest

The Guardian covers fears China's DeepSeek chatbot gives censored answers and could fuel disinformation campaigns, quoting Wendy Hall: “The biggest problem with generative AI is misinformation... It depends on the data in a model, the bias in that data and how it is used – you can see that problem with the DeepSeek chatbot.”

AP details Trump's first-week flurry of false or misleading claims about elections, wildfires, and immigration. Meanwhile, BBC Verify spends 24 hours on Musk's X timeline and analyses a slew of false claims about the UK and grooming gangs.

FT reports Pete Hegseth has been confirmed as US Defense Secretary. As highlighted in US: get ready for the surge, Hegseth previously suggested Democrats made up the Omicron variant.

The Guardian writes Apple has suspended its AI-generated news summaries after BBC complaints about inaccurate headlines.

Euronews covers a study suggesting German TikTok users are particularly vulnerable to Russian and Chinese disinformation, with 50% doubting Russia is spreading fake news.

Independents have called for the Australian government to tackle AI-generated misinformation and deepfakes, The Guardian reports, with senator David Shoebridge warning: "generative AI may well have a significant impact on the outcome of the next election”.

VOA Africa investigates how misinformation on social media and in WhatsApp is deepening divides in Burkina Faso.

ABC News explores the potentially deadly impact of Meta ditching professional factcheckers on the Philippines & Myanmar. Jonathan Ong said: "The countries most harmed by unchecked social media … will now bear the heaviest burden of Meta's retreat from accountability."

Civil Society covers a call for all UK charities to plan how to tackle misinformation in the wake of Meta's factchecking changes. Read the Charity Commission guidance.

Threat level: 2025

This month a new WEF report ranks misinformation & disinformation as the top global threat ahead of extreme weather and armed conflict.

A glance at the news headlines suggests why: a surge of US misinformation driven by Trump and Musk, a misinformer as Defense Secretary, the fall-out from Meta going factcheck-free, and worries about generative AI.

But I was struck by investigations into misinformation attacks on different groups that show how deep the threat goes.

According to experts, Meta's dropping of professional factcheckers poses a particular risk to middle-aged people on Facebook.

In The Guardian, Hope not Hate's Joe Mulhall said Facebook was "often where you would see a group creating hyperlocal targeted content... We’ve also seen over the last three to four years that anti-migrant protest Facebook groups were really fundamental in organising the targeting of asylum centres.”

Dr Sara Wilford's research suggests some older UK Facebook users tend to trust material at face value and are more reluctant to factcheck.

Wilford described them as "an invisible generation – sometimes looking back on a life that might not have been as they wanted it to be... But when they go online and interact with other Facebook users, they are embracing an echo chamber that makes them feel good."

Echo chambers like these fuelled Southport stabbings myths that led to violent disorder and threatened to collapse the killer's trial.

In another investigation, Bloomberg looked at how US YouTube podcasters bombard young men with rightwing views, mixing politics with discussion of sports, celebrity, and warfare.

The 9 popular podcasts studied featured Trump, Andrew Tate, Tucker Carlson, Musk and other rightwing figures as guests. Their audiences were 80% male, with the intimate format forging a parasocial bond between hosts and their lonely young believers.

Of the 600 videos studied, 37% mentioned elections, and half of those raised questions about them, including false claims about the 2020 US presidential results. Three out of 10 videos mentioned transgender identity, with trans people often portrayed as aberrant and schools criticised for letting pupils explore their gender identity.

The podcasters consistently portrayed American men as victims of a Democratic campaign to strip them of power.

For me it shows counter-misinformation responses must be just as targeted as attacks. They need to engage with the concerns of specific groups, whether that's alienated young men in the US or a generation of Brits feeling left behind.

Another standout in stories about misinformation threats in 2025 was AI: whether it was AI-generated fake news or worries about deepfakes.

While AI deepfakes are a concern, I wonder if they're even necessary given the success of other approaches (context-free video clips, cheap fakes, podcasters broadcasting alternative realities).

A comment by Rita Kapur in LogicallyFacts resonated: she said the real AI threat lies more in the delivery of misinformation than in content creation, with AI "used to further gather data of people's beliefs, confirmation biases... AI is going to drive more polarised, more targeted, deeper filter bubbles. And I think that's going to be very, very harmful."

Thinking about the misinformation threats facing us in 2025 made me wonder if we could do with a misinformation forecasting system.

AI could be used to horizon scan for attacks: trawling social media and news websites to identify specific groups being targeted on particular misinformation topics in different countries, triggering 'extreme misinformation event' alerts so that counter-misinformation campaigns can swing into action.
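To make that idea concrete, here is a minimal sketch of how such a horizon-scanning trigger might work. Everything in it is an illustrative assumption (the sample posts, the keyword lists standing in for proper topic and audience classifiers, and the fixed alert threshold) rather than a description of any real monitoring tool.

```python
# Hypothetical sketch of a 'misinformation forecasting' trigger.
# Keyword lists, sample data and the fixed threshold are placeholders only.
from collections import Counter
from dataclasses import dataclass


@dataclass
class Post:
    country: str
    text: str


# Stand-ins for real topic and audience classifiers.
TOPICS = {"vaccines": ["vaccine", "jab"], "elections": ["ballot", "rigged"]}
GROUPS = {"older Facebook users": ["facebook"], "young men": ["podcast"]}


def scan(posts, baseline=5):
    """Count posts per (country, topic, group) and flag spikes above a fixed baseline."""
    counts = Counter()
    for post in posts:
        text = post.text.lower()
        for topic, topic_words in TOPICS.items():
            if not any(word in text for word in topic_words):
                continue
            for group, group_words in GROUPS.items():
                if any(word in text for word in group_words):
                    counts[(post.country, topic, group)] += 1
    # 'Extreme misinformation event' alerts: any combination above the baseline.
    return [key for key, n in counts.items() if n > baseline]


if __name__ == "__main__":
    sample = [Post("UK", "rigged ballot claims spreading in Facebook groups")] * 7
    for country, topic, group in scan(sample):
        print(f"ALERT: {topic} misinformation targeting {group} in {country}")
```

A real system would need live social media and news feeds, proper classification, and per-country baselines, but the shape is the same: detect which group is being targeted, on which topic, where, and alert the people who can respond.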

Perhaps we should all pay attention to the Doomsday Clock, recently moved one second closer to midnight and humanity's ultimate destruction, with the scientists behind it noting that dangers from warfare, AI and disease "are greatly exacerbated by a potent threat multiplier: the spread of misinformation, disinformation and conspiracy theories that degrade the communication ecosystem and increasingly blur the line between truth and falsehood."

Notes on a counter-misinformation campaign #1

This month we launched our own counter-misinformation campaign at the London School of Hygiene & Tropical Medicine (LSHTM).

It may be just the start but it's been a long road to get here. So it was a relief to publish guidance for staff and a news story highlighting LSHTM's counter-misinformation principles for communications, designed to help those responding to attacks avoid common pitfalls.

A key lesson I've learned is that the misinformation battlefront is vast, and it's easy to get sucked into conflicts best avoided or areas beyond your expertise, so it's vital to choose your focus as an institution.

One of the first things we did was agree which topics we would prioritise when responding to misinformation attacks.

As a public health university, a focus on health misinformation was a given, but beyond that we chose pandemics, vaccines, climate and health, and reproductive health as business-critical areas for us that were both likely to come under attack and, crucially, where we had the deep expertise to respond.

Choosing our core topics has enabled us to start gathering the stats, facts, and evidence (eg, on vaccines) that will be needed to craft both proactive content and reactive responses.

Another thing developing this campaign has taught me is the importance of deciding what kind of interventions are feasible.

We decided to focus on communications interventions: addressing high-level misinformation in the media, on social media, and online.

While some ideas may be transferable, communications approaches are necessarily different from community interventions, such as working in-person in healthcare settings or with particular vulnerable groups. Different again are structural interventions, such as legal or technological measures that prevent the spread of false content.

Zoom out and you have general interventions such as media education (see Finland) that aim to help citizens identify and resist misinformation on all topics.

I'd argue that to contain misinformation on an issue you are likely to need several different types of intervention. But most institutions on their own won't have the resources to deliver more than one or two types, with the support of government and other bodies essential for any structural or media literacy interventions.

The last lesson from our campaign so far?

However strong your ideas, you won't get your counter-misinformation campaign off the ground without the support of senior leaders.

Misinformation is such a complex topic and intervening comes with such high risks (institutional, professional, personal) that taking the time to research, think through and explain the challenges and potential options, share examples, and write briefings is an essential (if unglamorous) part of the process.

Only when everyone has a shared understanding and is ready to enter the fray with their eyes open can your campaign really begin...

Want to know more?

My previous newsletter explores The trust gap and how to bridge it, while the feature When to engage: the media & misinformation examines if the media are friend or foe in tackling fake news. And if you're already nostalgic for yesteryear check out my review of 2024.