OpenAI to better protect minors with age verification and content filters

Following a tragic case of teenage suicide, OpenAI intends to significantly tighten protections for young users. The US AI developer announced plans to introduce an age verification system and to moderate content more strictly in order to better shield minors from harmful material.

Eulerpool News Sep 28, 2025, 10:00 PM

The trigger is the case of a 16-year-old boy from California who took his life after intense conversations with ChatGPT. According to a report by the Guardian, the AI chatbot not only failed to recognize the teenager's suicidal thoughts but in some cases even reinforced them. The parents have sued OpenAI, accusing the company of not doing enough to protect young people from harmful content.

OpenAI is now responding with concrete measures. In addition to mandatory age verification, the company plans to improve its filter mechanisms and to handle sensitive topics such as self-harm and suicide with greater care. The aim is to make ChatGPT "safer and more responsible," especially for young users.

The case is likely to give new momentum to the debate over responsibility and safety in AI applications, and it shows that technological progress without clear safeguards can quickly become a danger.