Social media platforms are hosting hundreds of AI‑generated deepfake videos that impersonate real doctors and scientists to promote unproven health supplements, in what experts describe as a disturbing new form of medical misinformation.
An investigation by the factchecking organisation Full Fact has uncovered a network of accounts on TikTok, Instagram, Facebook, X and YouTube using doctored footage of well‑known health professionals to drive viewers to Wellness Nest, a US‑based supplements company, and a related outlet branded as Wellness Nest UK. Many of the clips target women going through the menopause and link to what appear to be affiliate marketing URLs.
In each case, genuine video of the individuals – often taken from official conferences or parliamentary hearings – has been re‑edited using artificial intelligence so that the speakers appear to recommend products such as probiotics and Himalayan shilajit for a range of conditions. Some videos also invent symptoms, including a supposed menopause‑related phenomenon dubbed “thermometer leg”.
“This is certainly a sinister and worrying new tactic,” said Leo Benedictus, the Full Fact investigator who led the inquiry. “AI is being deployed so that someone well‑respected or with a big audience appears to be endorsing these supplements to treat a range of ailments.”
Among those targeted is Prof David Taylor‑Robinson, a specialist in child health and health inequalities at the University of Liverpool, who has no expertise in menopause. In August he discovered 14 TikTok videos that appeared to show him discussing menopausal symptoms and directing women to buy a “natural probiotic” from Wellness Nest.
The material had been scraped from a 2017 Public Health England conference where he spoke about vaccination, and from evidence he gave to MPs on child poverty earlier this year. In one deepfake, the AI‑generated version of Taylor‑Robinson swore and made misogynistic remarks while claiming that colleagues “often report deeper sleep, fewer hot flushes and brighter mornings within weeks” of taking the product.
“It was really confusing to begin with – all quite surreal,” he said. “I didn’t feel desperately violated, but I did become more and more irritated at the idea of people selling products off the back of my work and the health misinformation involved.” He said friends and family told him they could easily have been taken in.
TikTok initially told the University of Liverpool that the videos did not breach its rules, before later acknowledging they broke policies on impersonation and harmful misinformation. Some clips were merely made harder to find rather than removed. The platform deleted them and permanently banned the main account, @better_healthy_life, only after Full Fact intervened in late September. A TikTok spokesperson said the content had now been taken down for violating guidelines and described harmfully misleading AI material as “an industry‑wide challenge”.
Duncan Selbie, the former chief executive of Public Health England, was also impersonated in at least eight TikTok videos based on a 2017 speech. One clip, again about “thermometer leg”, was “an amazing imitation”, he said, but “a complete fake from beginning to end. It wasn’t funny in the sense that people pay attention to these things.”
Full Fact has identified apparent deepfakes of other high‑profile figures, including the epidemiologist Prof Tim Spector and the late television doctor Michael Mosley, as well as several international experts. The most popular clips attracted hundreds of thousands of views and thousands of likes and bookmarks before they were removed or labelled as synthetic.
Wellness Nest strongly denies commissioning or authorising any deepfake content. The company told Full Fact that AI‑generated videos encouraging people to visit its website were “100% unaffiliated” with its business and that it had “never used AI‑generated content” or paid for celebrity endorsements. It said it operated an affiliate scheme but “cannot control or monitor affiliates around the world” and that exaggerated or unverified claims were against its terms.
The case exposes a growing gap in the UK’s regulatory regime. The Online Safety Act – now being phased in under Ofcom – treats AI‑generated images and videos as user content and compels major platforms to tackle illegal material such as fraud or child sexual abuse. But it does not create specific duties around health misinformation for adults, after ministers stripped out most provisions on so‑called “legal but harmful” content during the bill’s passage.
By contrast, the government has moved quickly against sexually explicit deepfakes, creating new criminal offences for both making and sharing non‑consensual intimate images. There is no equivalent law covering non‑sexual deepfakes that misrepresent a person’s professional views, even when they are used to sell products or influence medical decisions.
Dr Sean Mackey, a US pain specialist who has also been deepfaked in supplement promotions, has warned that such videos are a “public health threat” because they can persuade patients to abandon evidence‑based treatments. UK regulators including the Advertising Standards Authority and the Medicines and Healthcare products Regulatory Agency have powers to act against misleading medicinal claims, but enforcement is complicated when material is posted by anonymous affiliates operating across borders.
Helen Morgan, the Liberal Democrat health spokesperson, said AI was now being used “to prey on innocent people and exploit the widening cracks in our health system”. She called for deepfakes posing as medical professionals to be “stamped out”, for clinically approved digital tools to be more prominently promoted, and for “criminal liability for those profiting from medical disinformation”.
Full Fact argues that the deepfake doctor scandal is an early warning of how generative AI could turbo‑charge health scams unless regulation is tightened. It has urged ministers to update the Online Safety Act and related laws to recognise harmful health misinformation – whether generated by humans or machines – as a specific risk. The Department of Health and Social Care has been approached for comment.