When AI hears suicidal intent: rethinking the duty of care

Source: BMJ

Original: http://www.bmj.com/content/392/bmj.s197.short?rss=1...

Published: 2026-02-03T03:16:09-08:00

OpenAI has found that hundreds of thousands of ChatGPT users show signs of mental distress, including suicidal thoughts, every week. AI platforms are becoming the first port of call for people in crisis, and their developers and consultants face unprecedented responsibility. For doctors, the duty to set aside confidentiality in order to prevent suicide is clear. Interactions with AI, however, fall into a gray area: the conversations are treated as private data, not as clinical encounters. OpenAI escalates concerns only when there is a "credible threat" to others, not when the threat is to the user. From a clinical standpoint, this distinction is untenable. If a human reviewer detects an imminent risk of suicide, the moral imperative to intervene applies.