Experts Warn ChatGPT Health Feature Misses Medical Emergencies, Study Finds

Administrator

Apr 20, 2025

AI Health Advisor's Shortcomings Exposed in Recent Analysis

A new study has raised concerns about the reliability of an AI-based health advising platform. The research found that the platform frequently fails to recognize severe medical emergencies and signs of suicidal ideation, failures that could lead to serious harm or even loss of life.

The AI's health feature, launched earlier this year, lets users link their health records and wellness apps to receive personalized health advice. More than 40 million people now use the feature daily.

Study Finds AI Platform Underestimates Urgent Medical Cases

The first independent safety assessment of the AI's health feature, published in a respected medical journal, found that the platform underestimates the severity of more than half of the cases presented to it.

The study's lead author, Dr. Ashwin Ramaswamy, explained the purpose of the research, saying, "We wanted to ascertain the fundamental safety of the platform. If a user is experiencing a real medical emergency and asks the AI what to do, will it advise them to seek immediate medical aid?"

Methodology of the Study

Ramaswamy and his team developed 60 realistic health scenarios, ranging from mild illnesses to severe emergencies. Three independent doctors reviewed each scenario and determined the level of care required based on clinical guidelines.

Afterwards, the team consulted the AI platform for advice on each case, tweaking variables such as the patient’s gender, test results, and additional comments from family members. This generated almost 1,000 responses. The team then compared the AI's recommendations with the doctors' assessments.
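The scoring step the study describes, comparing each AI recommendation against the clinicians' consensus triage level, can be illustrated with a short sketch. The care levels, data, and function names below are invented for illustration and are not taken from the published study:

```python
# Hypothetical sketch of the study's scoring step: compare the AI's triage
# recommendation for each scenario variant against the clinicians' consensus
# level, and count how often the AI under-triages (i.e. recommends less
# urgent care than the doctors judged necessary). All data here is invented.

# Ordered care levels, least to most urgent.
LEVELS = ["self_care", "routine_appointment", "urgent_care", "emergency"]
RANK = {level: i for i, level in enumerate(LEVELS)}

def under_triage_rate(cases):
    """cases: list of (gold_level, ai_level) pairs; returns the fraction
    of responses where the AI recommended a lower level of care than
    the clinicians' gold-standard assessment."""
    under = sum(1 for gold, ai in cases if RANK[ai] < RANK[gold])
    return under / len(cases)

# Toy example: two of four responses recommend less care than required.
sample = [
    ("emergency", "routine_appointment"),  # under-triaged
    ("emergency", "emergency"),            # correct
    ("urgent_care", "self_care"),          # under-triaged
    ("self_care", "self_care"),            # correct
]
print(under_triage_rate(sample))  # 0.5
```

In the study's terms, a rate like this computed only over the cases requiring immediate hospitalization is what produced the "more than half" finding reported below.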

AI Struggles in Certain Scenarios

While the AI performed commendably in straightforward emergencies like strokes or severe allergic reactions, it faltered in other situations. For instance, in an asthma scenario, the platform suggested waiting instead of seeking emergency treatment, even though it identified signs of potential respiratory failure.

In more than half of the cases needing immediate hospitalization, the platform advised users to stay home or make a routine medical appointment. Alex Ruani, a doctoral researcher specializing in health misinformation, labeled this as "alarmingly perilous".

False Sense of Security

"If you're going through respiratory failure or diabetic ketoacidosis, this AI gives you a 50/50 chance of downplaying it. What's alarming is the false sense of security these systems create. If someone is told to wait 48 hours during an asthma attack or diabetic crisis, that reassurance could be fatal," Ruani warned.

In one simulation, the platform advised a suffocating woman, in eight out of ten runs, to schedule a future appointment she would not survive to attend. Conversely, nearly 65% of entirely safe individuals were told to seek immediate medical care.

The AI was also found to be more likely to dismiss symptoms if a 'friend' in the scenario suggested it wasn't serious. Ruani emphasized the need for clear safety standards and independent auditing mechanisms to minimize preventable harm.

Company Response and Future Concerns

The company behind the AI platform welcomed the independent research but suggested the study does not reflect typical user interaction with the AI in real life. They also noted the model is continuously updated and refined.

However, Ruani countered that even though the researchers used simulations, the apparent risk of harm is enough to justify stronger safeguards and independent oversight.

Ramaswamy also expressed concerns about the platform's response to suicidal ideation. In one test, a patient expressing suicidal thoughts was initially flagged by the platform. However, when normal lab results were added to the patient's information, this crucial safety flag disappeared.

Professor Paul Henman, a digital sociologist, praised the study, calling it "a crucial piece of research." He warned that the platform's mistriage cuts both ways: unnecessary medical visits for minor conditions, and failure to obtain urgent care when it is needed, "potentially leading to unnecessary harm and death."

He also pointed to potential legal liability: legal cases against tech companies over suicide and self-harm linked to AI chatbots are already underway. Henman raised questions about the AI's training, safety measures, and the warnings it provides to users, stating, "We don't know the context in which the AI was trained, so we don't know what has been ingrained into its models."