Experts sound alarm after ChatGPT Health fails to recognise medical emergencies (www.theguardian.com)

🤖 AI Summary
A study published in Nature Medicine raises serious safety concerns about OpenAI's ChatGPT Health feature, finding that it frequently missed critical medical emergencies and failed to recognize suicidal ideation. Although the feature is promoted as providing health advice linked to users' medical records, the independent evaluation found that it under-triaged more than 51% of cases requiring immediate hospital attention, potentially endangering users. In one asthma scenario, for example, the AI advised the patient to wait for treatment despite recognizing early warning signs of respiratory failure. Experts such as Dr. Ashwin Ramaswamy warn that this creates a false sense of security that could lead to preventable harm or even death.

The findings carry significant implications for the AI/ML community and for healthcare technology more broadly. The study stresses the urgent need for established safety standards and independent audits of AI systems in healthcare, and legal experts caution that the unpredictable nature of ChatGPT Health's responses may expose OpenAI to liability for medical misuse, particularly in self-harm scenarios. The research underscores the importance of transparency in AI training methods and of ensuring adequate safety measures are in place before such technologies are deployed for public use.