Review of 2025/What next?
What just happened? Find out in a special edition bursting with misinformation news, analysis & practical advice for communicators.
In News digest: deepfakes target politicians & terror victims, X location fibbers, a ChatGPT suicide case, a GenAI image stops trains, Russian disinformation and the German elections, the CDC website goes anti-vax, the deepfake doctor will see you now, and Japan earthquake 'human-caused' claims. Plus: the law vs AI assistants, fake social media verifications, and teaching about misinformation in primary schools. Then join us for the wild ride that is 2025: The year in misinformation, find out What next, and catch up on This year on The Spectrum.
News digest
The Guardian investigates a network of over 150 YouTube channels spreading fake AI-generated videos attacking Labour and Keir Starmer.
ABC News reports on deepfake imagery of victims of the Bondi Beach terrorist attacks shared as part of antisemitic and racist disinformation.
CNN highlights the case of the parents of Zane Shamblin who are suing OpenAI over the role they believe ChatGPT played in his suicide.
BBC reports X's location feature reveals that many high-profile pro-Trump accounts pretending to be American are based outside the US. Some may be run by nation states, while others monetise polarising content.
New York Times explores how fake AI-generated videos made using Sora 2 are flooding people's social media feeds, often with any watermarks removed, and fuelling false narratives.
BBC covers how trains were cancelled after a fake AI-generated image of damage to Carlisle Bridge was circulated on social media.
DW reports Germany has accused Russia of attempting to influence February's federal elections by spreading online disinformation, including videos alleging ballot manipulation.
An investigation into the Free Birth Society reveals misinformation about the risks of giving birth without access to medical care has been linked to a series of infant deaths (The Guardian).
NPR highlights that the CDC website has been changed to state a link between vaccines and autism cannot be ruled out, despite decades of research finding no connection. A comprehensive new WHO review reaffirms the consensus that vaccines do not cause autism.
The Guardian looks into how TikTok and other platforms are hosting deepfake videos of real doctors with their words manipulated to sell supplements and spread health misinformation.
Japan Times examines misinformation spread following the Aomori earthquake including claims the disaster was 'human-caused'.
PoliticsHome covers Ofcom's investigation into whether social media platforms are doing enough to identify and remove illegal terror and hate content.
Tech Policy Press reports on new research from the Ada Lovelace Institute suggesting that current UK law does not protect people from the harms of using advanced AI assistants.
The University of Cambridge has launched a new site that tracks the price of the SMS verifications needed to create fake social media accounts.
Schools Week carries an opinion piece on the challenges of teaching UK primary school children about misinformation.
2025: The year in misinformation
The omens weren't good: 2025 began with a World Economic Forum report ranking misinformation & disinformation as the top global threat, ahead of extreme weather and armed conflict.
The Doomsday Clock, which counts down to humanity's ultimate destruction, moved one second closer to midnight thanks to "the spread of misinformation, disinformation and conspiracy theories" greatly exacerbating the dangers from warfare, AI and disease.
And, on 20 January, Donald Trump was inaugurated as President of the United States and began to appoint a cabinet of misinformers...
The big problem with reviewing the last 12 months: so much happened, where do you start?
Plot twists galore
One thing that's striking is how many misinformation attacks, which at first glance appear distinct and spontaneous, arrange themselves into the sort of wild storylines you get in a Netflix thriller.
Take attempts to link a rise in autism diagnoses to vaccines and medicines: March saw a surge in vaccine misinformation around a measles outbreak in Texas, and the US health department hiring discredited anti-vaxxer David Geier to study links between vaccines and autism, despite decades of research finding no link.
RFK Jr made misleading claims about the lives of people with autism in April, and by May was falsely claiming the MMR vaccine contains 'aborted fetus debris', moving to create a national autism database, and asking the CDC to produce guidance on treating measles with drugs and vitamins, despite vaccination being the only way to prevent the disease.
In June RFK Jr stacked the new CDC vaccine panel with members known for spreading vaccine misinformation. August saw a backlash against an FDA panel's inaccurate and misleading claims linking SSRI drugs used to treat depression in pregnancy to miscarriage and autism, and a deadly shooting at the CDC, with the gunman blaming COVID-19 vaccines for making him sick.
Then in September experts rejected Trump's unsubstantiated claim taking Tylenol (paracetamol) in pregnancy causes autism in children. And by November the CDC website had been altered to suggest a link between vaccines and autism could not be ruled out.
Analysing 12 months of attacks can also reveal fresh patterns, such as how foreign actors use different types of attack, a kind of 'hybrid information warfare', to degrade the enemy's information ecosystem:
In January a study suggested 50% of German TikTok users doubted Russia is spreading fake news. We were warned in September that Russian disinformation and a fake news network may have disrupted elections in Moldova, while in October the Institute for Strategic Dialogue reported popular chatbots cite Russian state media in almost 1 in 5 responses to questions about Ukraine.
And then in November Nathan Gill, Reform UK's former leader in Wales, was sentenced to over 10 years in prison after admitting taking bribes for pro-Russia interviews and speeches. Following the verdict, in December the government ordered an independent review into foreign financial influence in UK politics.
Tracing such narratives, and the connections between the actors involved, makes you long for one of those 'murder walls' beloved of crime dramas, with photos and post-its joined up with string.
Big Tech, big problems
A standout theme from 2025 was the role that apps and tech platforms play in the spread of misinformation.
In January the news that Meta was ditching independent fact-checkers on Instagram and Facebook led to fears of a flood of disinformation and harmful content. A later analysis confirmed that community notes, which Meta used instead, are easily derailed or evaded.
Apple suspended its AI-generated news summaries in January after BBC complaints about inaccurate headlines. A BBC report in February found four major AI chatbots included significant inaccuracies in summaries of news stories.
But the big story was the arrival of OpenAI's Sora 2 in September, not just an app for creating and sharing realistic GenAI videos but a "willing hoax generator" according to NewsGuard. They showed how easy it was to break its flimsy guardrails and make fake news reports pushing distortions and lies.
Nobody, it seems, cared about the ethical and legal implications of releasing such deepfake tools into the wild. October provided a good example of what can happen: a deepfake video news report falsely suggested leading candidate Catherine Connolly had withdrawn from the Irish presidential election (for evidence UK politicians are not immune, see News digest).
As for every misinformer's favourite platform, X, November changes revealed many high-profile accounts about US politics were based outside the US and making money from fuelling political rows. Pro-independence accounts claiming to be from Scotland were also found to be run by users outside the country.
UK complacency warning
One long-running saga: what would the UK Government do to tackle the kind of far-right disinformation that fuelled violent disorder following the Southport stabbings?
In 2024 Bridget Phillipson promised to embed critical thinking in the English school curriculum and help arm children against “putrid conspiracy theories”. With no details of the promised curriculum review forthcoming, by July 2025 a UK Science, Innovation and Technology Committee report was warning that the Online Safety Act wasn't protecting the UK from this kind of dangerous misinformation.
In August UK police forces were encouraged to disclose the ethnicity and nationality of suspects in high-profile cases to help counter disinformation. By October the Government was being accused of complacency about the threat of online misinformation and about 'gaps' in the Online Safety Act around GenAI, which could see the misinformation-fuelled summer 2024 riots repeated. That same month an investigation revealed that a US billionaire who backs fake news sites spreading anti-Muslim conspiracy theories had funded Tommy Robinson.
Finally, in November a National Curriculum Review was published, including recommendations for incorporating teaching about misinformation and disinformation at primary level. Misinformation features in chapters on digital and media literacy, but questions remain about how teachers will be supported to deliver lessons on such polarising and contentious topics.
The truth is out there
Conspiracy theories provided some of the strangest stories of the year.
Perhaps strangest of all was the claim that the Pentagon deliberately spread misinformation to fuel UFO conspiracy theories about Area 51. A Wall Street Journal investigation suggested the military circulated fake photos and omitted key details to protect secret US weapons programmes.
Trump was, as ever, in the thick of the action: first sharing a baseless conspiracy theory that Biden was executed in 2020 and replaced with a robotic clone, then posting and deleting an apparently AI-generated video of himself promoting the QAnon 'medbeds' conspiracy theory of futuristic pods that can cure disease and regrow limbs.
Then there was the conspiracy theory about Bovaer, a cattle feed additive proven to reduce greenhouse gas emissions, that dragged in Reform MPs, the dairy industry and Bill Gates.
But, as entertaining as they can be, conspiracy theories are often disturbing and sometimes dangerous. Take a son's year-long attempt to persuade his father not to believe conspiracy theories. Or the tragic case of 23-year-old Paloma Shemirani, who died in 2024 after refusing chemotherapy in favour of alternative treatments. A coroner said her conspiracy theorist mother influenced her decision, contributing "more than minimally" to her death.
US and us
Looking back it's impossible to ignore the seismic impact US figures have had on the global misinformation ecosystem.
Trump, Musk, RFK Jr, Vance et al have unfortunately proven my prediction that this would be a government of misinformers correct.
In Trump's first week in office he shared a flurry of false or misleading claims about elections, wildfires, and immigration. In the months that followed he would make misleading claims about USAID payments to Politico and "fake news media", repeat false claims about US funding for Ukraine, inclusion scholarships, and social security payments to dead people, and use video taken in the Democratic Republic of the Congo to try to back up his false claims about the mass killing of white farmers in South Africa. All the while he shared flattering AI slop videos and images of himself, which he also used to attack opponents.
Musk, acting as Trump's attack DOGE, used X as a megaphone to launch attacks on USAID, sharing conspiracy theories and making unevidenced claims that it was "evil" and "a criminal organization", and shared a slew of false claims about the UK and grooming gangs. Even as UK rightwing parties praised DOGE's work to cut the US budget, it was revealed many of its 'savings' were unevidenced or based on speculative, never-used figures.
RFK Jr shared misinformation about measles, vaccines, and treatments (see Plot twists galore) while JD Vance made it his mission to sow falsehoods about European governments censoring the media and annulling elections.
It's not hard to see why the same US officials spreading misinformation were also keen to cancel misinformation research, threaten media outlets, and, just this month, start denying visas to fact-checkers.
The fightback begins
Before we collapse under the landslide of misinformation that was 2025, it's important to note this year also saw many positive signs of a global fightback against false and misleading information.
Other countries could learn from Finland's national media literacy policy, which involves teaching children and adults to spot disinformation, with its success attributed to trust in authorities and the media.
Canada launched ElectoFacts, an online tool to debunk false claims about its elections, while Portugal set up a rapid response system to monitor and report misleading claims in the run up to its snap election.
Denmark is preparing a bill, likely to be passed next year, which would ban the sharing of deepfakes to protect Danish citizens’ personal characteristics, such as their voice and appearance, from being imitated and shared without their consent.
In Japan, Fujitsu is partnering with academic institutions to develop an AI fact-checking platform they say could help make quick judgements about the accuracy of online content.
What about the UK?
Alongside recommending that misinformation and disinformation should be taught in English primary schools, ministers expressed support for the idea that Public Service Media content should be prominent on major video sharing platforms, and signed the Paris Declaration on information integrity and independent media.
My own institution, the London School of Hygiene & Tropical Medicine (LSHTM), has also joined the fray: in January we launched our counter-misinformation principles for communications; by March we'd published the first in a new series of explainer articles; in July we delivered in-house training on misinformation to staff; and in September we organised our flagship event, Health Misinformation UNPACKED, which attracted over 1,000 attendees from universities, the NHS, charities, NGOs, and government. To coincide with the event we raised concerns that false and misleading health advice from social media and AI tools is putting lives at risk, and launched a call for a network to fight dangerous health misinformation in the UK.
What next?
So what lessons can we learn from this year in misinformation?
One is that misinformers are fast and agile, constantly switching to new targets and tactics, while the response of authorities in places like the UK and EU remains sluggish and uncoordinated.
While the UK Government and MPs continue to argue over whether the Online Safety Act covers AI tools, Musk is feeding rightwing talking points to Grok and China's DeepSeek chatbot is censoring its answers on sensitive topics.
In 2026, counter-misinformation responses must be better coordinated if they are to be effective. RUSI, for instance, has argued for a National Disinformation Agency to defend the UK's 'cognitive resilience', noting that no single agency is currently responsible for countering state-sponsored disinformation campaigns.
It's also vital that we learn from the misinformer playbook that's been so effective in the US.
Attacks have weaponised misinformation about 'wasted' public money, 'bias' and 'censorship' to slash big beasts like USAID and the CDC. At the same time, threats and funding cuts have scattered the civil society herd of universities and charities (and the media), leaving individual organisations vulnerable to legal action and afraid to intervene.
If institutions elsewhere are to avoid a similar fate, then 2026 is the year they must build networks and alliances, and look at hardening against attack those public bodies that are the guardians of trustworthy, high-quality information.
2025 has also been the year of scapegoats.
For example, the UK press claiming the Motability scheme gives free cars to people who wet the bed or are depressed, when tiny numbers of people with such conditions get mobility payments and are very unlikely to qualify for a car. Or claims that the UK Government spent £8bn on 'queer animals' and pro-trans robots (£8bn is the entire UKRI budget; the projects mentioned totalled only £10.4m), or that it's too easy to get a mental health diagnosis from a family GP over Zoom, prompting Mel Merritt of the National Autistic Society to respond: "no one has got an autism diagnosis through the GP – this is just incorrect, wrong, fake news."
Whether it's around disability, EDI, mental health, benefits, or immigration, misinformers have shown they are happy to target vulnerable groups if it helps them to profit or polarise.
Much more has to be done to reduce how major social media platforms incentivise outrage and hatred, to educate about scapegoating (and other tactics), and to call it out when it occurs.
Ultimately, what I think 2025 teaches us is how easy it is for the public conversation on contentious issues to be captured by false narratives. 2026 is the year we have to find new ways of influencing the public conversation around the benefits of science, public health, climate action, and global development or risk seeing these topics slide into irrelevance.
I'll offer my predictions about misinformation in 2026, as well as sharing news on LSHTM's exciting plans for the year ahead, next time.
2025 in numbers
85% of Instagram and TikTok posts about medical tests provide misleading or potentially harmful advice [DW].
58% of young people (aged 18-34) say they regret a health decision they made based on misinformation [Edelman].
45% of the time AI assistants misrepresent news content [BBC].
80% of the time GenAI app Sora 2 will create videos promoting provably false claims [NewsGuard].
71% of UK journalists consider the role to ‘counteract disinformation’ very or extremely important [Reuters Institute].
2025 in quotes
Peter Marks on resigning over RFK Jr's attacks on established vaccines:
“It has become clear that truth and transparency are not desired by the secretary, but rather he wishes subservient confirmation of his misinformation and lies.”
Alex Mahadevan on X users getting AI Grok to fact-check posts:
"X is keeping people locked into a misinformation echo chamber, in which they’re asking a tool known for hallucinating, that has promoted racist conspiracy theories, to fact-check for them."
Susan Monarez to staff about the deadly shooting at the CDC:
"… the dangers of misinformation and its promulgation has now led to deadly consequences. I will work to restore trust in public health to those who have lost it- through science, evidence, and clarity of purpose."
Daisy Soderberg-Rivkin commenting on the threat from deepfakes:
"In a world where everything can be fake, and the fake stuff looks and feels real, people will stop believing everything."
This year on The Spectrum
In 2025 I wrote about: bridging the trust gap, assessing our threat level, profiling generators of misinformation, predicting a US surge, investigating mis-leaders, considering a question of doubt, giving a tech update, discussing how to misinformation-proof research, sweating over scapegoat summer, picking highlights from LSHTM's health misinformation event, and cleaning up after slopaggedon.
The Spectrum of Misinformation will return in 2026...