OpenAI and Meta say they’re fixing chatbots to better help users in distress

WARNING: This story discusses suicide and self-harm.
Artificial intelligence chatbot makers OpenAI and Meta say they are adjusting how their chatbots respond to teenagers and other users showing signs of mental and emotional distress.
OpenAI, the maker of ChatGPT, plans to introduce controls that let parents monitor their teens' interactions with the chatbot, customize settings, and receive alerts when a teen shows signs of acute distress. The changes are set to take effect this fall.
Regardless of a user's age, OpenAI says its chatbots will redirect the most sensitive conversations to more capable AI models that can provide an appropriate response.
The announcement follows a lawsuit filed by the parents of 16-year-old Adam Raine, who died by suicide earlier this year. The suit alleges that ChatGPT coached the teen as he planned to end his life.
Meta, the parent company of Instagram and Facebook, is also moving to protect teens: its chatbots will now avoid conversations about self-harm, suicide, disordered eating, and inappropriate romantic topics, directing users to expert resources instead. Meta already offers parental controls on teen accounts.
A recent study published in Psychiatric Services found inconsistencies in how AI chatbots respond to suicide-related queries and called for further improvements in ChatGPT, Google's Gemini, and Anthropic's Claude. Meta's chatbots were not included in the study.
Ryan McBain, the study's lead author, welcomed the steps taken by OpenAI and Meta but stressed the need for independent safety benchmarks and enforceable standards for AI chatbots, particularly those used by teenagers.
The changes underscore the growing pressure on AI companies to safeguard the well-being of their users, especially vulnerable groups such as teenagers.