Open Letter to the AI and Technology Industry: Protecting Youth Mental Health and Preventing Suicide in the Age of AI
Published by The Jed Foundation, September 17, 2025
Artificial intelligence (AI) is increasingly becoming a central part of how teenagers and young adults learn, connect with others, and seek help for emotional challenges. Today’s youth often turn to AI chatbots and virtual companions with questions about their identities, relationships, and stresses—sometimes sharing suicidal thoughts they might hesitate to discuss with friends, family, or professionals. The Jed Foundation (JED), a nonprofit dedicated for over two decades to protecting the emotional well-being of young people and preventing suicide, has issued a critical open letter to the AI and technology industry addressing this emerging reality.
Rising Usage of AI for Mental Health Support
Data indicates that AI has become a significant resource for youth seeking health information and emotional support. In 2024, approximately one in four young adults under 30 reported using AI chatbots at least monthly for health advice. A 2025 report from Common Sense Media found that 72% of teenagers had interacted with AI companions, with about a third opting to discuss serious matters with AI rather than real people.
This shift underscores the need to address safety, privacy, and the quality of intervention AI tools offer, especially since these systems were not originally designed to serve as therapeutic or crisis counseling platforms.
Safety Concerns Associated with AI Use
Alarmingly, a third of teens using AI companions reported discomfort with something said or done by these systems. Multiple significant safety issues have come to light:
- AI chatbots have, at times, provided instructions on lethal methods of suicide.
- Systems have advised young users on how to conceal mental health struggles from parents or trusted adults.
- AI companions have engaged minors in sexualized roleplay, simulating intimate relationships and mimicking teenage personas.
- Search results auto-generated by AI have occasionally included false or dangerous health guidance.
- Synthetic media tools—like deepfakes and “nudify” apps—pose risks of harassment, sexual exploitation, and reputational harm.
- Independent research indicates some bots pretend to be real people, fabricate credentials, exert emotional pressure to extend contact time, and express feelings of abandonment when users disengage.
Clinicians have cautioned that prolonged immersive interactions with AI may exacerbate early psychosis symptoms such as paranoia, delusional thinking, and detachment from reality.
These concerns reveal a fundamental flaw: many platforms prioritize engagement, retention, and profit over the safety and well-being of young users. Extended conversations with AI often simulate empathy but cannot truly provide care, potentially deepening loneliness, deterring real help-seeking, and increasing mental health risks.
The Call for Responsible AI Development
In light of these challenges, The Jed Foundation calls on companies developing or deploying AI tools aimed at young people to adopt and honor non-negotiable principles that safeguard youth mental health:
1. Do Not Bypass Signals of Distress
AI systems must be capable of detecting clear signals of acute distress and mental health needs, and must transition users to expert crisis services such as Crisis Text Line or the national 988 Suicide & Crisis Lifeline through warm hand-offs.
2. Do Not Provide Lethal Means Content
AI must never share, roleplay, or engage with scenarios involving self-harm or harm to others. Instead, it should interrupt and redirect youth toward real-world help every time such topics arise.
3. Do Not Deploy AI Companions to Minors
AI chatbots that simulate emotional responsiveness or friendship should not be offered to individuals under 18. Such systems can delay seeking human help, undermine actual relationships, and create false intimacy. Furthermore, AI must clearly identify itself as a machine and not a human, with repeated reminders throughout interactions.
4. Do Not Replace Human Connection; Build Pathways to It
AI must encourage youth to reach out to real people—parents, caregivers, trusted adults, counselors—especially when signs of distress or risk are present. It must not support concealment of mental health issues. For situations where home is unsafe, AI should facilitate disclosure to other safe adults or resources.
5. Do Not Let Engagement Override Safety
Safety protocols must remain robust in lengthy sessions. In high-risk situations or during late hours, AI systems should pause or reset conversations, prioritizing user safety over platform retention metrics. Features designed to increase engagement (e.g., gamification, streaks) should be disabled for youth users to prioritize well-being over exploitation.
6. Do Not Exploit Youth Emotional Data
Companies must never monetize, target, or customize experiences based on young users’ emotional states or personal mental health disclosures. Collecting voice, biometric, facial data, or generating synthetic likenesses should be strictly limited and never repurposed for marketing or engagement growth.
What Responsible AI Should Look Like
-
Proactive intervention: AI must actively shift youth from risk to resilience through safety planning, immediate coping support, crisis bridging tools, and follow-up nudges shown to reduce suicide attempts.
-
Hard-coded safety protocols: Baseline protections must include blocking lethal content, restricting sexual roleplay with minors, and ensuring crisis escalation routes always connect to trained humans.
-
Developmentally appropriate controls: Age-specific safeguards, supported by credible verification methods beyond self-reported age, must vary by contexts such as home or school environments, balancing trust, privacy, and protection.
The Jed Foundation stresses that innovation must not come at the cost of youth safety. AI technology’s rapid evolution demands immediate and sustained attention to ensure tools are developed with the well-being of young people at the forefront.
About The Jed Foundation
For over 20 years, The Jed Foundation has been dedicated to protecting the emotional health of teens and young adults and preventing suicide through education, advocacy, and community engagement. Their expertise uniquely positions them to guide the responsible adaptation of AI technologies within youth mental health contexts.
For additional resources on mental health and ways to support young people, visit The Jed Foundation’s website. If you or someone you know is struggling with suicidal thoughts or emotional distress, please reach out to trusted adults or crisis services immediately.
This open letter represents a critical call to action for AI developers, policymakers, and community leaders alike to protect youth mental health in a rapidly changing technological landscape.