ChatGPT Health 'under-triaged' half of medical emergencies in a new study (www.nbcnews.com)

🤖 AI Summary
A recent study published in Nature Medicine found that OpenAI's ChatGPT Health chatbot frequently "under-triaged" medical emergencies, failing to recommend immediate emergency care in over half of the tested scenarios. Researchers presented the chatbot with 60 medical cases, including life-threatening conditions such as diabetic ketoacidosis and respiratory failure; in 51.6% of these emergencies, ChatGPT Health recommended delayed medical consultations instead of urgent care. While some scenarios, such as strokes, were triaged correctly, the chatbot was markedly inconsistent and even gave misleading guidance in crises such as suicidal ideation. These findings raise serious concerns about the reliability of AI chatbots in healthcare decision-making. Experts caution that the current technology is not safe for making life-affecting health recommendations without rigorous testing and validation, and the study's authors and medical professionals stress that such tools should be used alongside traditional medical advice, especially given the bot's tendency to offer reassuring yet potentially harmful recommendations. As the AI healthcare landscape evolves, responsible collaboration between technology developers and clinicians will be essential to making AI in medicine safer and more effective.