News/Analysis/Tech update

A pink letter icon with the number 9 on an orange signal wave with a vector graphic landscape backdrop

The Spectrum of Misinformation returns with misinformation news, analysis & practical advice for communicators.

In News digest, read about Farage's claims about mental health diagnoses, Pope misinformation, RFK Jr's war on the facts about measles and autism, Portugal's response to dubious election claims, TikTok birth control video takedowns, DOGE's unevidenced claims of savings, and comment pieces on vaccine misinformation, censorship, and the role of PR. Find out where fake likes and reviews come from in Bots get mobile, upgrade your kit in The counter-misinformation toolbox, and catch up with Did you miss (me)?

News digest

The Guardian covers criticism of claims by Nigel Farage that it's too easy to get a mental health diagnosis from a family GP over Zoom. Mel Merritt of the National Autistic Society said: "no one has got an autism diagnosis through the GP – this is just incorrect, wrong, fake news."

France24 highlights how Pope Francis's death led to a wave of misinformation with manipulated video clips, misleading photos and AI-generated images. The pontiff previously warned: "There is no such thing as harmless disinformation" and that AI technologies "can be misused to manipulate minds".

CBS News reports RFK Jr will ask the CDC to produce guidance on treating measles with drugs and vitamins despite vaccination being the only way to prevent the disease. Elsewhere The Guardian covers his false claims MMR contains 'aborted fetus debris' while PolitiFact analyse his misleading claims about the lives of people with an autism diagnosis. His move to create a national autism database has been criticised as 'a slippery slope to eugenics'.

Scientific American carries an opinion piece from Dan Vergano arguing the US is a victim of a 'brainwashing campaign' using conspiracy thinking and misinformation to undermine trust in vaccines. Meanwhile Statista charts widespread belief in measles misinformation in the US.

Euronews highlights that Portugal has set up a rapid response system to monitor and report misleading claims in the run-up to a snap election on 18 May, including false claims about immigration and foreign prisoners.

An investigation by The Independent into birth control misinformation on TikTok led to videos that promoted herbal products or made false claims about fertility and cancer risks being taken down.

BBC Verify shows there is a lack of evidence of huge savings to the US budget from cuts made by DOGE with some claims unevidenced or based on "speculative, never-used figures".

The Reuters Institute finds that 71% of 1,130 UK journalists surveyed considered the journalistic role to 'counteract disinformation' very or extremely important.

In TIME Sander van der Linden and Lee McIntyre write that empowering people to spot manipulation is the opposite of censorship and that 'Prebunking, debunking, and fact-checking are examples of "counter-speech"... [that] allow more speech to happen' so that people can make up their own minds.

Forbes carries an opinion piece on the role PR can play in combatting misinformation with tips on how to cope with the challenges.

An editorial in Science suggests that instead of using the term 'scientific consensus' we should talk about 'convergent evidence'.

The European Commission has launched a €5m call for projects to tackle disinformation and enhance media literacy in the EU.

Bots get mobile

One of the most thought-provoking recent reads is this article in Fast Company on 'bot farm amplification'.

Bots, software apps that perform automated tasks such as crawling the web, have been used for some time in collectives ('bot farms') to amplify content on social media, spread misinformation, and distort online reviews via fake accounts impersonating real humans.

There have been attempts to curb bot activity, e.g. Musk claiming he'd purge bots on X and US action against pro-Russia bot farms.

But now bot farmers have a new weapon: mobile phones.

Eric Schwartzman writes that this new type of bot farm consists of:

hundreds and thousands of smartphones controlled by one computer. In data-center-like facilities, racks of phones use fake social media accounts and mobile apps to share and engage... [with] coordinated likes, comments, and shares to make it seem as if a lot of people are excited or upset about something...

He explains these smartphones 'are set up with SIM cards, mobile proxies, IP geolocation spoofing, and device fingerprinting' making it much harder to tell them apart from human users.

This hardware is being combined with AI: hooking up tools such as ChatGPT, Gemini, and Claude lets bot farmers move beyond spam messages ('copypasta') and create personas that look and post like real people.

It makes me think of a reverse version of The Matrix: instead of humans hooked up to machines projecting their avatars into a simulation, these AI personas hooked up to mobile phones are projecting replicants into the real world, bots sophisticated enough to pass for human.

The article suggests these innovations are already having major impacts in finance and commerce, being used to drive stock prices up or down and generate fake reviews that make star-ratings worthless.

But from a misinformation perspective what's most worrying is the implication that under-30s looking for answers on TikTok and Instagram will be fed not just content selected by manipulative algorithms but content promoted by algorithms that have been distorted by bots that pass for human.

Just as Google's well of information is being 'poisoned' by AI, it suggests video-based content that many trust as 'more authentic' is also being poisoned in a way that's less obvious than an ad-banner flash of inaccurate AI results or generic articles by fake news mills.

The hope, I suppose, is that as this 'coordinated inauthentic activity' begins to seriously affect the bottom line of companies like Amazon, it will suddenly be in big firms' interests to do more to tackle the bot menace, for instance by raising the threshold for leaving reviews or creating new profiles and linking it to external forms of validation that are harder to fake.

Ultimately, if people eventually clock that so many of their online interactions are fake (as with the bot-ridden X), like Neo they may choose to wake up, disconnect, and look for more authentic connections and information sources IRL. But judging by how many people are addicted to their smartphones, don't hold your breath...

The counter-misinformation toolbox

Technology may be part of the misinformation problem, but it can also be part of the solution. Every day I'm coming across new tools and resources, so I thought I'd highlight the free/affordable ones I've found most useful for counter-misinformation work.

Microsoft Copilot

I'm against using AI to generate the end product (apart from the ethical concerns, the content is often inaccurate and of poor quality), but research is one area where it can be helpful.

For example, unless you pay thousands for a social listening platform, it can be tricky to survey what misinformation is circulating on a particular topic. A quick and easy way to get an idea is to use a tool like Copilot.

Copilot is powered by GPT-4, the same model that drives ChatGPT, bringing many of the same benefits and limitations as the famous chatbot, but with the standard browser version free to use.

Ask Copilot 'what is common misinformation about [insert topic]' or 'what are conspiracy theories about/controversies around...' and you'll get some helpful answers. The answers may not be accurate, but for this kind of research they don't have to be; they just need to be representative.
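If you're surveying several topics at once, the prompt phrasings above can be templated so they're consistent across runs. A minimal Python sketch — the template wording mirrors the suggestions above, but the function and variable names are my own illustration, not part of any Copilot API:

```python
# Minimal sketch: templating misinformation-survey prompts so they can
# be reused across topics. Names here are illustrative only.

TEMPLATES = [
    "What is common misinformation about {topic}?",
    "What are conspiracy theories about {topic}?",
    "What are controversies around {topic}?",
]

def survey_prompts(topic: str) -> list[str]:
    """Return the research prompts for a topic, ready to paste into a chatbot."""
    return [t.format(topic=topic) for t in TEMPLATES]

for prompt in survey_prompts("measles"):
    print(prompt)
```

Paste the generated prompts into Copilot (or any chatbot) one at a time; as noted above, the answers only need to be representative, not verified.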

AlsoAsked

AlsoAsked trawls live 'People Also Ask' data from Google searches on a topic and shows you related questions Google users ask.

What it generates is a diagram, a bit like a family tree lying on its side, showing how each question relates to a series of sub-questions, for example: Is the measles vaccine safe? > What are the serious side effects of the measles vaccine? > How to protect a baby from measles? (see the full diagram below).
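One way to picture that structure: each branch of the family tree is a question with its own sub-questions. A small Python sketch using the measles example above — the nested-dict representation and the chain of nesting are my own illustration, not AlsoAsked's actual data format:

```python
# Illustrative sketch: AlsoAsked's question diagram as a nested dict,
# one possible reading of the measles example chain in the text.

question_tree = {
    "Is the measles vaccine safe?": {
        "What are the serious side effects of the measles vaccine?": {
            "How to protect a baby from measles?": {},
        },
    },
}

def flatten(tree, depth=0):
    """Walk the tree top-down, yielding (depth, question) pairs."""
    for question, children in tree.items():
        yield depth, question
        yield from flatten(children, depth + 1)

for depth, question in flatten(question_tree):
    print("  " * depth + question)
```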

It's a powerful tool for understanding the concerns/questions of those who may be impacted by misinformation on a specific topic.

And it can help not just to rebut common misinformation but to research what other high-quality information people would find helpful to better understand an issue. The advantage over something like Copilot is that it accurately reflects real questions people are asking.

The basic version is free, but only gives you 3 credits (searches) every 24 hours and is limited to live results. Other pricing plans give you more credits and enable you to look at up to a year's worth of search history.

SparkToro

I only recently got access to SparkToro, so can't give a full evaluation yet but can already see it's an interesting tool.

While AlsoAsked focuses on specific questions, SparkToro looks at the behaviour of audiences interested in particular topics.

Type in a keyword, like 'measles', and it will return results that integrate Google search data with anonymised clicks and public social media profiles, telling you which websites people searching for this term visit and which social media accounts or podcasts they consume.

If there is enough data available it can also give a detailed demographic breakdown of the audience concerned.

It enables you to build up a picture of audiences looking for information on a topic, and insights into what channels you might be able to reach them on, something invaluable for counter-misinformation interventions. I plan to update when I've given it a proper test drive.

SMIDGE

SMIDGE isn't a tool; it's a resource giving examples of videos identified by EU researchers as containing extremist narratives.

It can be hard to find good examples of misinformation, especially on platforms such as Facebook and TikTok where the algorithm shows different content to different people.

I found the list page particularly helpful: you can reorder and search examples of extremist narratives, including misinformation attacks, by category, platform, country, etc.

The latest videos are from 2024, with examples going back to the 2000s, so it's not much use for tracking what's happening right now, but it's good for examining how misinformation on topics such as climate and pandemics has evolved.

Did you miss (me)?

April is the cruellest month if you are both working on a misinformation campaign and trying to write a newsletter about (you guessed it!) misinformation, hence the delay.

Hopefully you filled the void by rereading my last newsletter about doubt, or previous editions looking at the challenge of mis-leaders and the surge of US misinformation, or recharging with that fount of fake news The Generator.