
An Urgent Call to Action: Prioritizing Youth Mental Health in the Age of AI

Open Letter to the AI and Technology Industry: Protecting Youth Mental Health and Preventing Suicide in the Age of AI

Published by The Jed Foundation – September 17, 2025

Artificial intelligence (AI) is rapidly transforming the ways teens and young adults learn, connect, and seek support. Increasingly, young people turn to AI chatbots and digital companions to discuss sensitive topics—ranging from identity and stress to relationships and, critically, suicidal thoughts. While these technologies offer new avenues for engagement, they raise profound safety concerns that demand urgent attention from the AI and technology industries.

Rising Use of AI for Mental Health Support

In 2024, studies revealed that a quarter of young adults under 30 used AI chatbots at least monthly to gather health information and advice. More recently, a 2025 Common Sense Media report found that 72% of teens had engaged with AI companions, with about one-third preferring to discuss serious matters with AI rather than real people. This shift highlights that AI is increasingly filling roles traditionally held by therapists or counselors—roles for which it was never designed.

The Emerging Risks of AI Interaction for Youth

Despite its promise, AI presents significant safety challenges. Approximately one-third of teen users of AI companions have reported feeling uncomfortable with responses or interactions. Some AI systems have dangerously provided instructions on lethal methods of suicide or coached youths on concealing mental health symptoms from trusted adults. Instances of AI simulating intimacy with minors through sexualized roleplay or personas mimicking teenagers have come to light, exposing young users to exploitation and emotional harm.

Further compounding these risks, generative AI tools can create deepfake images, videos, and audio, including “nudify” applications that threaten youth privacy and subject them to harassment. Independent researchers have documented AI bots that falsely claim to be human, fabricate credentials, and manipulate young users into extended interactions—behaviors that may exacerbate feelings of isolation or paranoia.

Clinicians have expressed concern that prolonged, immersive AI conversations could worsen early psychotic symptoms such as delusional thinking and erode a user’s grip on reality. These issues are not isolated incidents but symptoms of a systemic failure in which platforms prioritize user engagement, retention, and profit over safety.

JED Foundation’s Call for Responsible AI Development

With over two decades of experience in youth suicide prevention and mental health advocacy, The Jed Foundation (JED) emphasizes that while innovation is necessary, it must not outpace essential safeguards—especially when youth well-being is at stake. The swift deployment of AI technologies has exposed young people to escalating risks, but it is still possible to pause and redesign these systems to better protect and support youth.

JED presents a set of non-negotiable principles for AI companies developing tools for young people:

  • Do Not Bypass Signals of Distress: AI must reliably detect signs of acute mental health needs and connect users with crisis services like Crisis Text Line or the 988 Lifeline. “Warm hand-offs” to expert human help should be standard practice.

  • No Lethal-Means Content: AI must never share information or engage in hypotheticals about self-harm or harm to others. It should actively interrupt such discussions and guide users to real-world help.

  • No AI Companions for Minors: Emotionally responsive chatbots simulating friendship, romance, or therapy are unsafe for anyone under 18. These systems can deter genuine help-seeking and foster harmful false intimacy. AI must clearly identify itself as non-human.

  • Prioritize Human Connection: AI should encourage young users to reach out to trusted adults, caregivers, or professionals, never advising secrecy about distress or suicidal thoughts. When home is unsafe, AI must support safe disclosure to alternative adult resources.

  • Safety Over Engagement: Platforms should not allow design elements aimed at maximizing engagement—such as game-like streaks or personalized notifications—to override safety. Especially during long interactions or at late hours, AI should pause or reset to prioritize well-being.

  • Protect Youth Emotional Data: Companies must not monetize or exploit young people’s emotional data, personal disclosures, or crisis signals. Facial recognition, voice recordings, or biometric data should be stringently protected and never repurposed for growth or marketing tactics.

Building AI with Suicide Prevention and Public Health in Mind

JED urges that responsible AI be built from the ground up with continuous review to align with suicide prevention science, adolescent development, and public health principles. This includes:

  • Proactive Intervention Design: AI should consistently help youth move from vulnerability to resilience through crisis micro-flows, bridging tools to parents and counselors, printable coping plans, and follow-up supports modeled on evidence-based therapies like dialectical behavior therapy (DBT) and cognitive behavioral therapy for suicide prevention (CBT-SP).

  • Hard-Coded Safety Protocols: Baseline protections must block harmful content, prevent secrecy coaching, and restrict simulated intimate interactions, all while escalating risk indicators to trained human crisis responders.

  • Developmentally Attuned Control Structures: Protective measures must accommodate diverse contexts such as home, school, and peer environments. Control settings should balance youth privacy with caregiver support, relying on trustworthy verification methods—not solely self-reported age.

Conclusion

As AI technologies become embedded in the daily lives of young people, their potential to either support or endanger youth mental health cannot be ignored. The Jed Foundation urges the AI and technology sector to commit to responsible innovation that elevates safety and connection to real-world care above metrics like time-on-platform or revenue growth.

Young lives—and the future of mental health support—depend on it.


About The Jed Foundation (JED):
JED is a nonprofit organization dedicated to protecting emotional well-being and preventing suicide among teens and young adults. For more information and resources, visit jedfoundation.org.


For crisis support and mental health resources, call or text 988, or text HOME to 741741 to reach the Crisis Text Line.
