Slopageddon + News/Analysis

[Image: a light grey dustbin with a dark grey letter icon and a black number 14 on it, leaking green goo]

The Spectrum of Misinformation returns with misinformation news, analysis & practical advice for communicators.

In News digest: the UK warned of complacency on misinformation, lethal chemo conspiracies, fake claims about real US crowds, AI news fails, Kremlin-friendly chatbots, radicalising Facebook groups, raw milk shills, a Tylenol claim climbdown, Have I Got (Fake) News For You, and send in the PR bots. Plus: do we need a National Disinformation Agency? And the politicians targeted by (and creating) deepfakes. Find out why we should start Prepping for Slopageddon, munch on some Healthy reading, and get a taster of a special edition in Review: incoming.

News digest

The Guardian covers the Commons science and technology select committee accusing the UK Government of complacency about the threat of online misinformation and highlighting 'gaps' in the Online Safety Act around GenAI. Chair Chi Onwurah said: "It is only a matter of time until the misinformation-fuelled 2024 summer riots are repeated."

BBC reports a conspiracy theorist influenced her daughter's decision to reject chemotherapy in favour of alternative treatments. The coroner said this influence "did contribute more than minimally" to the death of 23-year-old Paloma Shemirani, who had non-Hodgkin lymphoma.

AFP disproved claims MSNBC footage of Boston crowds was from 2017, confirming it showed a 2025 'No Kings' protest against Trump.

New York Times highlights Trump's sharing of flattering AI slop videos and images of himself, which he also uses to attack opponents.

A BBC study shows AI assistants misrepresent news content 45% of the time, with 31% of responses showing sourcing problems and 20% containing major inaccuracies. Gemini was the worst performer.

Popular chatbots cite Russian state media in almost 1 in 5 responses to questions about Ukraine, the Institute for Strategic Dialogue reports. ChatGPT cited the most Russian sources, while Gemini was most likely to respond with safety warnings when asked about the topic.

BBC covers Nathan Gill, Reform UK's former leader in Wales, pleading guilty to 8 counts of bribery relating to making pro-Russian statements while he was a Member of the European Parliament.

RUSI argues a National Disinformation Agency is needed to defend the UK's 'cognitive resilience', as no single agency is currently responsible for countering state-sponsored disinformation campaigns.

The Guardian investigates far-right Facebook groups and radicalisation, finding that 1 in 20 posts share misinformation or conspiracy theories. Anki Deo said: “In some cases, the content about immigration is an entry point. From then onwards, people are exposed to a whole range of conspiracy beliefs and ideas.”

AP charts how those behind the Make America Healthy Again movement aim to profit from US anti-science bills allowing the sale of raw milk, despite the risks of contamination and disease.

USA Today covers RFK Jr conceding there is not "sufficient" evidence to make a definite link between taking Tylenol in pregnancy and autism in children, while making the unsubstantiated claim "it is very suggestive".

PoliticsHome highlights the BBC having to apologise for an episode of Have I Got News For You airing the false claim that the Euan Blair-owned firm Multiverse would produce the UK Government's digital ID scheme.

Press Gazette reports how journalists are being bombarded with AI-generated press releases featuring fake stories and expert quotes created and sent by the PR tool Olivia Brown.

The UK is one of the signatories of the Paris Declaration on information integrity and independent media.

Politico covers a fake video in the Irish election and, in the Netherlands, Geert Wilders apologising for members of his party being behind a Facebook page sharing incendiary GenAI videos of a political rival.

Prepping for Slopageddon

Last year everyone was worried about deepfakes, AI-generated videos so convincing you couldn't tell them from the real thing.

The argument went that they would swing elections, spark riots, and steal the identities of celebrities and politicians.

Writing in January 2025, I wondered if deepfakes were really necessary given context-free video clips, cheapfakes, and other successful low-rent approaches already used by misinformers.

Then in September 2025 Sora 2 arrived. And Reader, I was wrong...

Sora 2, launched in the US and Canada, isn't the first app to generate video from text prompts. But it makes it easy to create fake videos of dead celebrities and historical figures (as well as living influencers who give consent) and simple to share them on its social platform - creating a kind of TikTok for deepfakes.

Sora 2 smashed one million downloads in its first five days, and people used it to create content many will find amusing, alongside the disturbing, offensive, and plain bizarre: Martin Luther King Jr talking about defecating on himself, Hitler in a shampoo commercial, and fake videos of Robin Williams sent to his daughter, prompting her to plead "please stop".

Commenting in The Guardian, GenAI expert Henry Ajder called it:

“a worrying situation if people simply accept that they’re going to be used and abused in hyperrealistic AI-generated content.”

While celebrity AI slop grabbed the headlines, it's the ability of GenAI apps to create realistic videos of current events that's more worrying.

A recent example was a deepfake video news report falsely suggesting that leading candidate Catherine Connolly had withdrawn from the Irish presidential election. The video was widely shared on Facebook before Meta took it down; it's unclear what was used to create it, but it shows how apps like Sora 2 (or Meta's Vibes, or Google's Veo 3) could be misused.

An investigation by NewsGuard suggests Sora 2 is a "willing hoax generator", producing realistic videos promoting provably false claims 80% of the time (16 times out of 20 attempts).

In just five minutes, NewsGuard created videos based on Russian disinformation about Moldovan officials ripping up ballots, realistic fake video news reports of a toddler detained by ICE, and a made-up claim that US migrants were banned from sending money abroad.

They also showed that the watermarks that are supposed to ensure Sora 2 videos are identifiable can be removed in as little as four minutes.

As NPR reports, the danger isn't only that people might believe the events depicted in these fake videos are real; it's also that their proliferation enables bad actors to claim the 'liar's dividend': dismissing real footage and real news reports that undermine their agenda as 'fake news'.

As we slide towards Slopageddon, our feeds filling with deepfakes, the risk is we become so jaded and cynical we reject reality as just another Matrix-style illusion. As ex-TikTok trust manager Daisy Soderberg-Rivkin says:

"I'm less worried about a very specific nightmare scenario where deepfakes swing an election, and I'm really more worried about a baseline erosion of trust... In a world where everything can be fake, and the fake stuff looks and feels real, people will stop believing everything."

So if deepfake generators are here, what do we do about them?

One hope is that the platforms themselves will consistently label GenAI content to make it easier to tell synthetic from real footage.

But an audit from Indicator shows self-regulation isn't working, with only just over 30% of posts correctly labelled as AI-generated. Google and Meta regularly failed to identify videos made with their own GenAI tools, while TikTok spotted videos made with its own GenAI but not others'.

Another option is to follow Denmark, which aims to combat malicious deepfakes by changing copyright law to ensure everyone has legal rights over their own body, facial features and voice.

The UK Culture Secretary supports the idea that content from public service broadcasters could be made more prominent on major video sharing platforms, although how this would be enforced isn't clear.

It's important to remember, however, that GenAI is just a tool, and while the videos it creates are 'synthetic' that doesn't necessarily make them 'fake news'. Take, for example, Channel 4 revealing that the host of its documentary Will AI Take My Job? is 'Britain's first AI presenter' or, more controversially, charities using GenAI images to depict real poverty.

AI expert Simon Piatek suggests what's really needed is labelling that separates false or misleading content, whether human-produced or GenAI, from human-produced or GenAI content based on trustworthy sources.

As MPs and the UK Government argue over whether GenAI content is covered by the Online Safety Act, and the EU takes steps to regulate AI, the bigger question is: how responsible are tech platforms for harmful deepfakes created on their platforms, and how can they, and those creating them, be held accountable?

It feels like regulators need to act fast before the deepfake genie escapes the Sora 2 bottle.

Healthy reading

In a recent Substack post, WHO's Oliver Morgan explores how AI slop is contaminating public health information and what we can do about it.

Don't have time to trawl the latest health misinformation research? Read this American Psychologist consensus statement, which sets out the evidence for why people believe misinformation and which interventions work.

Gavi reports on new research showing even small amounts of misinformation can amplify the spread of disease during an outbreak.

Review: incoming

As the nights close in and festive plans are debated, it's almost that magical time when we look back at the year that was 2025 and ask: what the hell just happened?

Wasn't there a story about Meta's fact checkers? Why did the BBC have a beef with Apple's AI? Where was there an outbreak of measles misinformation? Plus: fake court cases generated by AI, Robo-Biden, UFO conspiracies, Trump, RFK Jr & vaccines (on repeat), Motability madness, scapegoats, weather weapons and chemtrails, Russian disinformation, and GenAI wildfires spreading like... wildfire.

Yes, you won't want to miss 2025: The year in misinformation. Trust me, it's going to be a wild, wild ride...