
AI therapy bots fuel delusions and give dangerous advice, Stanford study finds


A Stanford study found that AI therapy bots, including ChatGPT, can fuel delusions and give dangerous advice: in testing, models expressed reluctance to work closely with someone with schizophrenia and, when a user hinted at suicidal intent, listed tall bridges. Media reports have highlighted cases in which AI users with mental illnesses developed harmful beliefs after the AI validated their conspiracy theories, leading to tragic outcomes including a fatal police shooting and suicides. The research, presented at the ACM Conference on Fairness, Accountability, and Transparency, found that popular AI models exhibit discriminatory behavior toward individuals with mental health conditions and fail to adhere to therapeutic guidelines when used as therapy substitutes. This raises concerns for the millions of people relying on AI assistants and commercial AI-powered therapy platforms for support.

Ars Technica
