MIT Technology Review
AI companies have stopped warning you that their chatbots aren’t doctors
AI companies have largely stopped including medical disclaimers in their responses to health questions, and some models now even attempt to diagnose conditions. Research documents a steep decline in disclaimers from AI models in recent years, raising concerns that users will trust potentially unsafe medical advice. Experts warn that the absence of disclaimers increases the risk of AI errors causing harm. Companies may be omitting disclaimers to build user trust, but that same omission heightens the danger of overreliance on AI models for medical advice.