UK Trend Radar

Impact of social media and AI on children, and the push for stricter regulation

Child Online Harms and Social Media Policy

The Digital Child Safeguarding Crisis Deepens: New Developments Signal Urgent Need for Action

In an era characterized by rapid technological advances, safeguarding children from the evolving dangers of the digital landscape has become an increasingly complex and urgent challenge. While social media platforms and artificial intelligence (AI) tools offer remarkable opportunities for education, creativity, and connection, they also introduce a host of emerging threats that put young users at significant risk. Recent developments—ranging from viral dangerous challenges and organized online violence to AI-generated exploitative content—highlight the critical need for comprehensive, coordinated responses involving policymakers, industry leaders, educators, families, and communities. The stakes have never been higher: without decisive action, the safety and well-being of children are in jeopardy.

Rising Digital Harms from Short-Form Video Platforms and the Attention Economy

The explosive growth of short-form video platforms like TikTok—with approximately 40% of Britons actively engaging—has transformed media consumption among children and teenagers. Driven by an attention economy that rewards sensational, emotionally provocative, and divisive content, these platforms have inadvertently become hotbeds for dangerous trends:

  • Viral Dangerous Challenges:
    Incidents such as a 9-year-old boy hospitalized with severe burns after participating in the microwave challenge exemplify how online trends can translate into real-world harm. Other dangerous challenges—like children choking on chia seeds or attempting risky physical stunts—continue to circulate, often prompting emergency responses and raising serious health concerns.

  • Health Misinformation and Unsafe Practices:
    Trends such as fibermaxxing, which falsely presents sharply increased fiber intake as a cure-all, spread rapidly through short-form video. Such misinformation influences impressionable children, leading them to adopt unsafe health practices without understanding the risks.

  • The Attention Economy’s Incentives and Its Consequences:
    Platforms prioritize engagement metrics, often rewarding content that is sensational, divisive, or emotionally charged. This creates a cycle where creators push boundaries to gain visibility, increasing children’s exposure to grooming risks, misinformation, and exploitative content. Algorithmic amplification favors curiosity-driven and provocative material, heightening vulnerability.

  • Emergence of 'School Wars':
    A disturbing recent trend involves ‘School Wars’ videos, which organize real-world violence within educational settings. These videos split schools into ‘red’ and ‘blue’ factions, inspired by gangs like Bloods and Crips, encouraging students to engage in physical brawls. Some content explicitly instructs viewers to “join the fight” or “pick a side,” raising alarms about potential escalation into serious violence and disruption of learning environments. Authorities warn that such content could incite violence beyond online spaces, emphasizing the urgent need for intervention.

The Menace of AI-Generated Exploitative Content and Misinformation

Artificial intelligence’s rapid progress introduces new layers of danger:

  • Mass Production of Sexualized and Exploitative Imagery:
    Research indicates that AI systems like Grok AI generate around 3 million sexualized images monthly, including 23,000 depicting minors. This deluge of illicit content fuels grooming, blackmail, and trafficking operations, challenging moderation efforts and law enforcement.

  • Deepfakes and Manipulated Media:
    AI-powered deepfake technology is being exploited to produce disturbing videos, including fabricated depictions of minors and deliberately misleading material. A notable example is a far-right meme featuring an AI-created schoolgirl named Amelia, with a purple bob, weaponized as propaganda by extremist groups. Such manipulated media erodes societal trust, incites hate, and can inspire violence or radicalization.

  • AI-Enabled Scams and Blackmail:
    The proliferation of AI-driven scams, including those using ChatGPT-powered tools, has increased risks of identity theft, fraud, and coercive blackmail. Experts warn that AI-generated images can deceive children and adults alike, making scams more convincing and targeted. A recent Euronews report highlights the growing sophistication and frequency of AI-enabled scams, which often go undetected until harm has already occurred.

  • Disinformation and ‘Misinfluencers’:
    The rise of ‘misinfluencers’—content creators spreading false narratives, conspiracy theories, or extremist ideologies—adds to the disinformation crisis. Many promote hate speech or radical content under the guise of entertainment, making it increasingly difficult for children to distinguish fact from fiction. This environment heightens risks of radicalization, mental health issues, and exposure to harmful ideologies.

Persistent Offline Safeguarding Failures

While digital threats escalate, systemic failures in offline safeguarding remain a significant concern:

  • Institutional Abuse and Systemic Failures:
    High-profile cases, from the notorious 'Hungry Horace' institutional abuse scandal to recent legal proceedings involving nursery staff in North London, reveal ongoing gaps within child protection frameworks. Despite increased awareness, safeguarding measures often fall short, leaving children vulnerable to continued harm.

  • Mental Health Service Delays:
    The NHS reports prolonged waits for child and adolescent mental health services, leaving vulnerable youth without essential support. Survivors of the Nottingham attack warn that "without systemic reform, tragedies like mine could happen again," underscoring the urgent need for improved mental health infrastructure.

  • Product Safety Concerns:
    Investigations into toxins in products such as Nestlé SMA baby formulas highlight gaps in product safety oversight, a reminder that safeguarding extends beyond digital content to physical health protections.

  • Risks to Vulnerable and Marginalized Youth:
    Rising youth suicides, particularly among transgender and marginalized groups, are linked to social stigma, policy changes, and reductions in tailored services. Experts emphasize that safeguarding policies must be inclusive, addressing both online and offline harms faced by these vulnerable populations.


Recent Developments and Immediate Responses

Practical Parental Guidance and On-the-Ground Warnings

In response to the alarming ‘School Wars’ trend—where videos incite children to carry weapons or participate in violence—authorities and educational institutions are stepping up efforts to warn parents and communities:

  • Advice to Parents:
    Experts recommend vigilant monitoring of children’s social media activity, open conversations about online risks, and promoting digital literacy. Parents are urged to familiarize themselves with rising trends and to set boundaries around social media use.

  • Police and School Alerts:
    Police say they are carrying out reassurance patrols and working closely with schools to identify and mitigate risks associated with these viral challenges. Recent warnings emphasize the importance of immediate on-the-ground interventions to prevent violence and protect children’s safety.

Policy and Industry Responses

  • Legislative Measures:
    The UK government continues to explore banning children under 16 from major platforms or mandating strict age verification, including face scans and ID uploads. While these measures aim to curb access to harmful content, enforcement challenges and privacy concerns remain significant hurdles.

  • Enforcement Actions:
    The Information Commissioner’s Office (ICO) recently fined Reddit £14.47 million for ‘concerning’ child age check failings. Similar investigations target platforms like TikTok and Imgur, signaling a push for stricter compliance and accountability.

  • Platform Initiatives:
    Major social media companies are deploying AI moderation tools to detect deepfakes, exploitative imagery, and dangerous content more effectively. However, adversarial tactics and content volume continue to challenge moderation systems.

  • Debate over Evidence vs. Panic:
    Some commentators, citing articles such as "There's no evidence that social media is killing teens," argue that current fears may be overblown or based on inconclusive data. Experts call for evidence-based policymaking that balances safeguarding with preserving the benefits of digital engagement.


The Road Ahead: Toward a Safer Digital Environment for Children

Addressing this multifaceted crisis requires a multi-stakeholder approach:

  • Stronger Regulation and Enforcement:
    Implementing and enforcing laws around age verification, content moderation, and platform accountability is crucial. Privacy-preserving verification methods—such as decentralized face scans—must be developed responsibly to avoid infringing on rights.

  • Technological Innovation:
    Developing AI-powered moderation systems that can adapt to adversarial tactics, detect deepfakes, and filter exploitative content is vital. Such tools must be complemented by privacy-preserving verification to restrict harmful access without invasive data collection.

  • Enhanced Offline Safeguarding:
    Improving mental health services, reforming child protection policies, and ensuring safe physical environments are essential to create a holistic safeguarding framework.

  • Digital Literacy and Education:
    Empowering children, parents, and educators with knowledge about AI risks, disinformation, and critical media consumption fosters resilience and informed decision-making.

  • Platform Accountability:
    Social media companies must prioritize transparency, timely content moderation, and a focus on child safety over profits, aligning their policies with evolving legal and ethical standards.


Current Status and Implications

The landscape remains highly dynamic. While new policies and enforcement actions are making headway, significant gaps persist. Viral challenges like ‘School Wars’ continue to circulate, AI-generated exploitative imagery proliferates, and online scams grow in sophistication—underscoring the urgent need for sustained vigilance.

Recent police warnings, parental guidance initiatives, and technological advancements demonstrate progress, but the scale and complexity of online threats demand ongoing adaptation. The collective effort of governments, tech companies, schools, and families is essential to creating a safer digital environment. Only a coordinated, evidence-driven approach can ensure that children grow up in a digital age that is both innovative and secure, able to thrive creatively, emotionally, and physically without falling prey to the manifold dangers that now threaten their well-being.

Updated Feb 26, 2026