Almost everyone in the UK now says they have seen misinformation on social media, and growing numbers believe it is distorting elections, damaging public health and undermining trust in democracy. Yet despite years of warnings, there is still no clear measure of how much online content is actually false – and only a tiny minority of people have taken any formal steps to improve their ability to spot it.
An investigation drawing on polling by Ofcom, Ipsos, the Alan Turing Institute, the Reuters Institute and others shows a country that is both saturated with suspect information and deeply sceptical about the institutions meant to provide reliable facts.
In 2024, a nationally representative survey by the Alan Turing Institute found that 94 per cent of UK residents reported encountering misinformation on social media. Separate polling for Ipsos’s “State of Democracy 2025” project suggests that fake news and false information are now seen as the single biggest threat to democracy in Britain, ahead of extremism, corruption and foreign interference.
Regulators and researchers describe the UK as being in a “sustained misinformation moment”: a prolonged period in which almost everyone is exposed to material they think is false or misleading, trust in news and politicians has slumped, and new technologies such as AI‑generated deepfakes are accelerating the spread and sophistication of deceptive content faster than the law can keep up.
What the numbers do – and do not – show about “how much is fake”
Despite increasingly stark warnings, there is still no reliable figure for “what proportion of the internet is false” in the UK or globally. None of the major studies attempts to count or classify every post, video or article. Instead, they ask people what they have seen and how they feel about it.
The Alan Turing survey’s finding that 94 per cent of people have “encountered misinformation” on social media captures perception, not a forensic audit of content. So do Ofcom’s figures, which show that four in ten UK adults say they saw misinformation or deepfake content in the space of a single month in 2024, mostly about politics, elections, health and the environment.
Among those who reported seeing such content, 71 per cent said they had encountered it online, 43 per cent on television and 21 per cent in print newspapers or their online editions. Ofcom’s researchers found that younger adults, people from minority ethnic backgrounds, LGB+ people and those with mental health conditions were more likely than average to say they had come across misinformation.
In a separate tracking study of adults’ media use and attitudes, Ofcom reported that 49 per cent of UK adults who use social media said they had seen a misleading or false news story on those platforms in 2024, up from 45 per cent a year earlier. Of those, 13 per cent said they had gone on to share the story themselves “to let others know about it” – tending to add a sceptical or critical comment, but still helping the content itself to spread.
Taken together, the data point to near‑universal exposure: almost everyone online in Britain says they have, at some point, run into content they believe is false or misleading. But they do not support claims sometimes seen in commentary or on social media that “half” or “two‑thirds” of everything online is fake. Those figures usually arise from misreading surveys about people’s experiences as if they were measurements of content.
The cost of confusion: belief, doubt and falling trust
If exposure is widespread, confusion is not far behind. A major survey for the fact‑checking charity Full Fact, carried out by Ipsos in late 2023, found that about one in three UK adults – 34 per cent – admitted that they had believed a news story was real until they later learned it was fake. Only 44 per cent said they find it easy to tell the difference between true and false information online about news and current affairs. Half of respondents said it was not easy to work out which sources were trustworthy and which were not.
At the same time, many people think they are behaving more carefully than others. In the same Ipsos polling, 56 per cent agreed with the statement: “I always do further research on the news and current affairs information before I believe it.” Yet 59 per cent agreed that “the average person in the UK doesn’t care about facts about politics and society anymore; they just believe what they want.”
A similar pattern emerges around so‑called “filter bubbles” – the idea that people see only information that reinforces their existing views. Around 61 per cent of respondents said they thought the “average person in the UK” lives in their own internet bubble, seeking out opinions they already agree with. Only 29 per cent said the same about themselves, although this rose to more than four in ten among 18‑ to 34‑year‑olds.
These perceptions sit in a wider collapse of trust. The Reuters Institute’s Digital News Report 2024 puts trust in news in the UK at 36 per cent – the share of people who say they trust “most news, most of the time”. That is roughly 15 percentage points lower than before the 2016 Brexit referendum, although up slightly from a record low in 2023.
Public service broadcasters such as the BBC, ITV and Channel 4 remain the most trusted individual news brands. Tabloid newspapers and some digital‑only outlets score lowest. But faith in political actors is weaker still: Full Fact’s 2024 report cites polling suggesting that only around 9 per cent of British adults say they trust politicians to tell the truth. Office for National Statistics figures, reported by the Financial Times, show trust in political parties fell to about 12 per cent in 2023, making them the least‑trusted institutions in the UK.
In this climate, concern about misinformation is high. Ipsos research for Full Fact found that 68 per cent of adults are worried about the spread of false or misleading information about news and current affairs, and roughly a quarter say they fear their own political opinions may be based on misinformation – with anxiety higher among heavy social‑media users.
Asked about the impact of false information online, large majorities told Ipsos they believed it had a negative effect on democracy in the UK, on politics more broadly and on people’s health. In later work for its State of Democracy 2025 study, Ipsos found that 75 per cent of Britons are worried about the state of democracy in the next five years, and that around two‑thirds now see fake news as the leading threat.
Where misinformation flows: from broadcasters to TikTok and influencers
Despite the rise of social media, television and established online news outlets still provide much of the UK’s news. A briefing to Parliament summarising Ofcom’s data shows that in 2024 about 70 per cent of adults consumed news via TV and 71 per cent via online websites or apps. When asked which formats they trust most, people tend to put TV and radio at the top and social media or video‑sharing sites at the bottom.
However, Ofcom’s latest news‑consumption report, published in 2024, marked a symbolic shift: for the first time, more people in the UK said they got news online than via television. Just over half of adults now use social media as a news source.
Within that, Facebook remains the single largest platform for news among adults, used by about 30 per cent, followed by YouTube on 19 per cent, Instagram on 18 per cent, X (formerly Twitter) on 15 per cent, WhatsApp on 14 per cent and TikTok on 11 per cent. For younger audiences, video‑led platforms such as TikTok, Instagram and YouTube loom far larger than television news.
When asked who is most to blame for spreading false or misleading information, though, the public points its finger primarily at social networks. In the Ipsos–Full Fact polling, 55 per cent of respondents said social media and video‑sharing platforms were most responsible, followed by political parties and campaigns (21 per cent), the UK government (19 per cent), individual journalists (17 per cent), “regular people” (16 per cent) and traditional media organisations (15 per cent).
The same survey found that the public expects those same platforms, along with government and regulators, to take the lead in reducing misinformation. More than half of respondents said social‑media companies should be responsible for tackling false information, followed closely by the government, regulators and media organisations. Fact‑checking sites and charities were also seen as important, but fewer than four in ten people placed primary responsibility on them.
New research suggests influencers are becoming a major vector for news – and with it, for unreliable information – particularly among younger Britons. An Ipsos survey for the Anthropy conference in 2025 found that only 31 per cent of Britons say they trust news from online influencers and individual content creators “a great deal” or “a fair amount”. Among 16‑ to 34‑year‑olds, that figure rises to 47 per cent.
Yet more than half of 16‑ to 34‑year‑olds – 55 per cent – told Ipsos that they get online news from influencers every day. At the same time, about three‑quarters of people of all ages, and an even higher share of younger adults, believe fake news and misinformation are prevalent in online news from influencers.
The result is a paradoxical information environment for under‑35s: they are among the most likely to say that influencer content is riddled with misinformation, yet also among the heaviest consumers of that content.
Elections, democracy and the fear of being misled
These trends are not abstract. They feed directly into how people view elections and democratic decision‑making.
Ipsos polling for Full Fact in the run‑up to the July 2024 general election found that about two‑thirds of UK adults were concerned that voters would be misled by false or misleading claims in the campaign. The same share supported the idea of political parties signing up to honesty and transparency standards for their advertising.
A quarter of respondents in that survey said they were worried that their own political opinions might be based on false information. This concern was higher among those who use social media heavily for news and current affairs.
In Ipsos’s wider State of Democracy survey, large majorities of Britons said they believed that false information online is harming democracy, weakening trust and making it harder for citizens to make informed choices. Alongside fake news, people pointed to corruption, lack of accountability and the influence of money in politics as major democratic threats, and showed strong support for tougher rules on social‑media platforms and political donations.
The picture is complicated by low trust in the very institutions that would be expected to police campaign lies. While many people say they want the government, regulators and media to tackle falsehoods, surveys consistently show that politicians and political parties are among the least‑trusted actors in British public life.
AI, deepfakes and sexualised abuse
On top of long‑running worries about misleading headlines and doctored photos, a newer source of alarm has emerged: AI‑generated “synthetic media”, including deepfakes.
Ofcom research in 2024, summarised by the UK Safer Internet Centre and the South West Grid for Learning (SWGfL), found that 43 per cent of adults aged 16 and over said they had seen AI‑generated content online, whether images, videos, audio or text. Among children aged eight to fifteen, the figure was even higher, at around half.
Among those who had seen any synthetic media, about one in seven reported coming across sexual deepfakes. Of these, nearly two‑thirds involved celebrities or public figures. Around 17 per cent were believed to depict under‑18s, 15 per cent appeared to show someone the viewer knew personally, and in 6 per cent of cases the content appeared to show the viewer themselves.
Campaigners and the Children’s Commissioner for England have warned that such material is fuelling a climate of fear and harassment, particularly for school‑age girls, and have called for bans on so‑called “nudification” apps that can generate fake sexual images and for an AI‑specific bill to address deepfake sexual abuse.
Ofcom has published guidance warning that deepfakes can be used not just to humiliate or blackmail individuals but to “demean, defraud and disinform”, including by faking statements by politicians or public figures. Its research suggests that fewer than one in ten adults feels confident they could reliably spot a deepfake.
Economic and behavioural impact
Alongside democratic and personal harms, misinformation has a growing price tag – although, again, the data are patchy.
A widely cited 2019 study by the University of Baltimore and cybersecurity firm CHEQ estimated that fake news costs the global economy about $78bn a year. The authors attributed this to factors such as financial misinformation causing stock‑market volatility, reputational damage for brands caught up in false stories, costs of dealing with security threats and the impact of health misinformation, such as anti‑vaccine claims.
There is no comparable, robust estimate for the UK alone. But as one of the world’s most digitally connected economies, with a large financial sector and high levels of social‑media use, the UK is likely to bear a significant share of those global costs, even if its portion cannot yet be precisely calculated.
Behaviourally, the public response is marked by a gap between concern and action. The Alan Turing Institute’s research found that more than 85 per cent of respondents were “fairly” or “extremely” concerned about the spread of misinformation on social media. Yet only 3 per cent said they had taken any kind of media‑literacy course, and just 7 per cent had used fact‑checking tools or other resources designed to help them verify claims.
At the same time, there is broad support for platforms intervening more assertively. The Turing survey asked people about a range of behind‑the‑scenes measures that social networks and video platforms might take, such as demonetising content that spreads falsehoods, pushing misleading posts lower down in feeds, moderating or removing content more quickly and “de‑platforming” repeat offenders.
Seventy‑two per cent of respondents said they were comfortable with at least one of these approaches. The appetite for under‑the‑bonnet changes contrasts with more ambivalent attitudes towards overt censorship or the government deciding what counts as “true”.
How regulators are responding – and what they are not doing
The UK’s main response to harmful online content has been the Online Safety Act 2023, which gives Ofcom sweeping new powers over internet services. The Act focuses primarily on illegal material – such as terrorism content, child sexual abuse material and fraud – and on content harmful to children. It also imposes systemic obligations on platforms to assess and mitigate risks, improve transparency and support media literacy.
Crucially, the law does not ban misinformation or disinformation in general. Parliament ultimately stopped short of giving Ofcom direct powers over “legal but harmful” speech for adults, amid concerns about free expression and the risk of the state being drawn into adjudicating truth in political arguments.
Instead, the Act expands Ofcom’s media‑literacy duties, requiring it to help users understand the nature and impact of mis‑ and disinformation and to reduce their exposure to it. It also obliges the regulator to set up expert advisory bodies, including the Disinformation and Misinformation Advisory Committee and, more recently, the broader Online Information Advisory Committee.
These committees, which include academics, journalists, civil‑society representatives and former platform executives, are tasked with advising Ofcom on how to approach misinformation within its wider online‑safety remit. Their work is still in its early stages, with agendas and minutes beginning to appear through 2025.
So far, Ofcom has signalled that it intends to use its new powers assertively in some areas: pornography sites have already faced large fines and daily penalties over failures to implement age checks, for example. But on misinformation, the emphasis remains on research, guidance and education rather than direct sanctions.
Critics, including Full Fact, argue that the Online Safety Act has left major gaps around political misinformation, particularly in elections. They point out that the factual content of political advertising remains largely unregulated: the Advertising Standards Authority does not intervene in the accuracy of political ads, leaving campaign leaflets, social‑media ads and targeted online messaging free to make claims that would be rejected in commercial marketing.
Full Fact has called for the creation of an independent regulator for political advertising, for the non‑statutory Ministerial Code to be put into law, and for rules against deceptive campaign materials such as party literature dressed up to resemble local newspapers. So far, there is little sign of government appetite for comprehensive reform in these areas.
There are also unresolved questions about how the current government is approaching information threats behind the scenes. The previous administration’s controversial Counter Disinformation Unit, which monitored online narratives about issues such as Covid‑19 and elections, has been criticised for a lack of transparency about its activities and contacts with platforms. Ministers have said that any such work must be proportionate and respect free speech, but the details of current arrangements remain opaque.
The measurement problem – and the limits of what is known
Across these debates lies a basic difficulty: no‑one can say with precision how much of what Britons see online is false.
The existing data almost all come from surveys that ask people what they believe they have encountered, or how easy they find it to distinguish truth from falsehood. They tell us that almost everyone has at some point seen something they think is untrue, that substantial minorities have believed false stories, and that worry about the consequences for democracy and health is high.
They do not, on their own, tell us how many posts on Facebook, how many videos on TikTok or how many political ads are factually wrong, misleading by omission or presented without crucial context. Nor do they say how effective different interventions – from media‑literacy classes in schools to downranking of dubious content – actually are at reducing belief in, or sharing of, falsehoods.
Researchers at the Alan Turing Institute, Ofcom and elsewhere are starting to examine the effectiveness of some interventions, but large‑scale, independent trials are still rare. Meanwhile, many of the most detailed data about what circulates on major platforms remain in the hands of the companies themselves, which have scaled back some transparency programmes in recent years.
What is clear is that the UK has entered a prolonged period in which:
- Almost all internet users report encountering misinformation.
- About half of online adults say they have seen misleading or false stories on social media in the past year.
- Large majorities are worried about the impact on democracy, politics and health.
- Trust in news and political institutions is low and has fallen sharply over the past decade.
- Young people are increasingly relying on influencers and social platforms for news, even as they say those spaces are awash with fake content.
- AI‑driven tools such as deepfakes are adding a new layer of complexity, from sexual abuse to fabricated speeches.
At the same time, only small minorities of people are making use of formal tools, training or resources that might help them navigate this landscape. Most say they rely on their own judgement – and many believe that other people are far less careful.
As the UK faces future elections, rapid advances in generative AI and an unsettled geopolitical environment, the struggle over information is likely to intensify. For now, the evidence shows a public that feels besieged by misinformation but is unsure what to trust, a regulatory system still finding its footing and a large gap between the scale of concern and the modest steps taken so far to address it.