June 1, 2025
In an era where the line between human and artificial intelligence grows increasingly blurred, a troubling trend has emerged across social media platforms: the rise of AI influencers spreading potentially harmful misinformation about everything from mental health to beauty standards. Recent investigations have revealed a digital landscape where algorithms, not expertise, determine what advice reaches millions of vulnerable users.
A groundbreaking Guardian investigation published yesterday found that more than half of the top 100 trending mental health advice videos on TikTok contain misinformation. Meanwhile, a parallel trend documented by The Washington Post shows people turning to AI chatbots for brutally honest beauty assessments that friends and family won’t provide. Together, these developments paint a concerning picture of our growing reliance on artificial intelligence for deeply personal guidance.
The Mental Health Misinformation Crisis
The Guardian’s investigation into TikTok’s mental health content revealed a digital Wild West where unqualified influencers—both human and AI-generated—dispense questionable advice to millions of viewers. Among the dubious recommendations:
- Eating an orange in the shower to reduce anxiety
- Taking supplements with limited supporting evidence, such as saffron and holy basil
- Methods claiming to heal trauma within an hour
- Guidance that presents normal emotional experiences as signs of serious mental disorders
David Okai, a consultant neuropsychiatrist and researcher at King’s College London who reviewed the videos, noted that many posts misused therapeutic language, using terms like “wellbeing,” “anxiety,” and “mental disorder” interchangeably, “which can lead to confusion about what mental illness actually entails.”
The investigation found that 52 out of 100 videos under the #mentalhealthtips hashtag contained some form of misinformation, with many others being vague or unhelpful. This digital environment prioritizes “short-form, attention-grabbing soundbites” over “the more nuanced realities of qualified therapeutic work.”
The “Harsh Truth” Phenomenon
Simultaneously, The Washington Post has documented a growing trend of individuals seeking brutal honesty about their appearance from AI chatbots like ChatGPT. Unlike human friends who might soften feedback to protect feelings, these AI systems deliver unfiltered assessments that can be both specific and harsh.
Ania Rucinski, 32, turned to ChatGPT after concluding that friends would never tell her honestly how she could improve her appearance to better match her “godlike” boyfriend. This trend reflects a concerning shift in how people seek validation and advice, bypassing human relationships in favor of algorithmic judgments.
The Psychological Impact
The convergence of these trends creates a perfect storm of potential psychological harm. Experts have identified several concerning patterns:
| Type of AI Influence | Potential Harm | Expert Assessment |
|---|---|---|
| Mental health misinformation | Delayed proper treatment, self-misdiagnosis, inappropriate self-treatment | “This is providing misinformation to impressionable people and can also trivialize the life experiences of people living with serious mental illnesses.” – Dan Poulter, former health minister and NHS psychiatrist |
| Beauty standard reinforcement | Body image issues, decreased self-esteem, pursuit of unnecessary cosmetic procedures | Comment sections show “strong skepticism and criticism towards using AI tools like ChatGPT for personal feedback on appearance.” – Washington Post reader summary |
| Oversimplification of complex issues | False expectations about treatment timelines, feelings of failure when quick fixes don’t work | “Each video is guilty of suggesting that everyone has the same experience… that can easily be explained in a 30-second reel.” – Amber Johnston, British Psychological Society-accredited psychologist |
The Regulatory Response
The proliferation of AI-influenced content has caught the attention of lawmakers and regulatory bodies. Chi Onwurah, a Labour MP who chairs a technology committee investigating misinformation on social media, expressed “significant concerns” about the effectiveness of the Online Safety Act in “tackling false and/or harmful content online, and the algorithms that recommend it.”
“Content recommender systems used by platforms like TikTok have been found to amplify potentially harmful misinformation, like this misleading or false mental health advice,” Onwurah added. “There’s clearly an urgent need to address shortcomings in the OSA to make sure it can protect the public’s online safety and their health.”
TikTok defended its platform, stating it is “a place where millions of people express themselves, come to share their authentic mental health journeys, and find a supportive community.” The company claimed there are “clear limitations to the methodology of this study, which opposes this free expression and suggests that people should not be allowed to share their own stories.”
The Human Cost
Behind the statistics and regulatory debates are real people affected by this digital misinformation ecosystem. Mental health professionals report seeing patients who have self-diagnosed based on TikTok videos or delayed seeking proper treatment because they believed in quick fixes promoted by influencers.
“I had a patient who spent months trying various TikTok anxiety ‘hacks’ before finally seeking professional help for what turned out to be a treatable anxiety disorder,” said Dr. Melissa Shepard, a psychiatrist interviewed by The Guardian. “By that point, their symptoms had worsened significantly, and they felt like a failure because none of the social media solutions had worked.”
Similarly, therapists report seeing clients devastated by AI beauty assessments that reinforced insecurities or suggested changes that aligned with narrow, often Eurocentric beauty standards.
The Path Forward
As AI continues to permeate our digital lives, experts suggest several approaches to mitigate the potential harms:
- Digital literacy education: Teaching users to critically evaluate the source and credibility of online advice
- Platform responsibility: Implementing stronger content moderation and verification for health-related information
- Regulatory frameworks: Updating laws to address the unique challenges posed by AI-generated content
- Expert involvement: Encouraging collaboration between platforms and qualified health professionals
- Transparency requirements: Mandating clear disclosure when content is AI-generated or lacks expert review (a hypothetical sketch of such a label follows this list)
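To make the transparency idea concrete, here is a minimal, purely illustrative sketch of what a machine-readable disclosure label might look like. The field names (aiGenerated, expertReviewed, and so on) and the banner logic are assumptions invented for this example; they do not correspond to any existing platform API or regulatory standard.

```typescript
// Hypothetical sketch: a minimal machine-readable disclosure label a
// platform might attach to health-related posts. All field names are
// illustrative assumptions, not any real platform's schema.
interface ContentDisclosure {
  postId: string;              // platform-internal identifier for the post
  aiGenerated: boolean;        // was the content produced or heavily assisted by AI?
  aiToolsUsed: string[];       // e.g. ["text-generation", "voice-clone"]
  expertReviewed: boolean;     // has a qualified professional reviewed the claims?
  reviewerCredential?: string; // e.g. "licensed clinical psychologist", if reviewed
  topic: "mental-health" | "beauty" | "other";
}

// A viewer-facing warning can then be derived mechanically from the label,
// rather than relying on creators to self-disclose in captions.
function disclosureBanner(d: ContentDisclosure): string | null {
  if (d.topic !== "other" && d.aiGenerated && !d.expertReviewed) {
    return "This content was AI-generated and has not been reviewed by a qualified professional.";
  }
  if (d.aiGenerated) {
    return "This content was created with AI assistance.";
  }
  return null; // no banner needed
}

// Example: an AI-generated mental health tip with no expert review.
const example: ContentDisclosure = {
  postId: "example-123",
  aiGenerated: true,
  aiToolsUsed: ["text-generation"],
  expertReviewed: false,
  topic: "mental-health",
};
console.log(disclosureBanner(example));
```

The design point is that a structured label like this would let platforms apply warnings consistently and let regulators audit disclosure rates, instead of leaving transparency to each creator’s discretion.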
“Social media can be a powerful tool for increasing awareness and reducing stigma around mental health,” said Professor Bernadka Dubicka, online safety lead for the Royal College of Psychiatrists. “But it’s important that people are able to access up-to-date, evidence-based health information from trusted sources.”
As we navigate this new frontier, where algorithms increasingly influence our self-perception and wellbeing, the challenge remains to strike a balance between technological innovation and human welfare. The current landscape suggests we still have significant work to do.
FAQ: Understanding AI Influencers and Digital Misinformation
What exactly are AI influencers?
The term “AI influencer” can refer to two distinct phenomena: fully artificial, computer-generated personalities that exist only digitally but have real social media followings, and human influencers who rely heavily on AI tools to create content or who promote AI-generated advice. Both types contribute to the spread of unverified information online.
How prevalent is mental health misinformation on social media?
According to The Guardian’s investigation, more than half (52%) of the top 100 trending videos under the #mentalhealthtips hashtag on TikTok contained some form of misinformation. Many others were vague or potentially unhelpful.
Why are people turning to AI for beauty advice?
Many people feel that friends and family won’t give them honest feedback about their appearance out of politeness or concern for their feelings. AI systems like ChatGPT don’t have these social inhibitions and will provide direct, sometimes harsh assessments when prompted to do so.
What are some common types of mental health misinformation spread by influencers?
Common types include oversimplified “quick fixes” for complex conditions, misuse of clinical terminology, overgeneralization of personal experiences, promotion of supplements without sufficient evidence, and content that pathologizes normal emotional experiences as mental disorders.
How can I identify reliable mental health information online?
Look for content created by qualified mental health professionals (psychologists, psychiatrists, licensed therapists), check if claims are backed by scientific research, be wary of “one-size-fits-all” solutions, and verify information across multiple reputable sources. Organizations like the National Institute of Mental Health, the American Psychological Association, and the Royal College of Psychiatrists provide evidence-based information.
What are platforms doing to address this issue?
TikTok says videos are taken down if they discourage people from seeking medical support or promote dangerous treatments. When people in the UK search for terms linked to mental health conditions on TikTok, they are directed to NHS information. However, critics argue these measures are insufficient given the scale of the problem.
Is there any regulation of AI-generated content or advice?
Regulation is still catching up to the technology. In the UK, the Online Safety Act aims to address harmful content, but many experts and lawmakers believe it has significant shortcomings when it comes to AI-generated content and algorithmic recommendation systems.
Can AI chatbots provide legitimate mental health support?
While AI chatbots can provide some forms of support and may increase accessibility, experts emphasize they cannot replace professional mental health care. They lack the ability to provide personalized therapy based on a comprehensive understanding of an individual’s unique circumstances and may not recognize when someone needs urgent intervention.
What should I do if I’ve been following advice from social media that might be misinformation?
If you’ve been following mental health or medical advice from social media, it’s advisable to consult with a qualified healthcare professional. Don’t discontinue any current treatment without professional guidance, and be open with your provider about what advice you’ve been following.
How might this trend evolve in the future?
Experts predict we’ll see increasingly sophisticated AI-generated content that becomes harder to distinguish from human-created material. This may necessitate new forms of digital literacy education, stronger platform policies requiring disclosure of AI involvement, and updated regulatory frameworks specifically addressing AI-generated health information.