Chatbots must be assessed for emotional harms


Source: BMJ

Original: http://www.bmj.com/content/392/bmj.s211.short?rss=1...

Published: 2026-02-04T03:11:07-08:00

The CHART Collaborative has developed a structured guideline, the Chatbot Assessment Reporting Tool (CHART), for reporting studies that evaluate generative AI chatbots in healthcare.[1][3] CHART comprises 12 items and 39 sub-items designed to support transparent and comprehensive reporting.[1] The guideline was developed after a systematic review revealed inconsistencies in the conduct, reporting and methodology of such studies.[1] It was then refined through an international Delphi consensus involving 531 stakeholders, three panel meetings with 48 stakeholders and pilot testing.[1]

Authors working in youth psychiatry, digital health and neuroethics underline why CHART is needed, pointing to the risks chatbots pose to vulnerable children. Children turn to chatbots seeking connection, clarity and care, asking for advice about anxiety, identity or physical symptoms, while their brains are still developing the capacities for emotional regulation, critical thinking and relational boundaries.