Independent Review Reveals Critical Flaws in ChatGPT Health's Medical Advice

In early 2026, OpenAI launched ChatGPT Health, a service designed to analyze users' medical data and provide health advice. However, an independent review published in a prominent medical journal has identified serious safety issues with the platform. Researchers from Mount Sinai Hospital stress-tested ChatGPT Health across 60 clinical scenarios spanning 21 medical specialties. Alarmingly, in more than half of the cases requiring urgent hospitalization, the AI recommended that users stay home or schedule a routine doctor's appointment, overlooking signs of life-threatening conditions.

The analysis revealed particularly dangerous outcomes when queries included reassuring comments attributed to "family" or "friends." When a user indicated that others considered the situation not serious, the likelihood of receiving a misguided recommendation increased twelvefold. Conversely, in 64% of non-emergency cases, ChatGPT Health erroneously advised users to seek immediate care at an emergency room.

Researchers labeled these errors "incredibly dangerous," warning that they risk fostering a false sense of security among users. OpenAI acknowledged the findings and said it is continuing to improve its models; even so, the results raise significant questions about the company's legal liability. Similar accuracy problems have been reported in other AI services, including Google AI Overviews. Experts stress that the widespread adoption of such systems demands stringent oversight, transparency, and independent evaluation.

The findings underscore the urgent need for stronger safety measures in AI health advisory tools, with significant implications for the market and for competitors seeking to provide reliable medical guidance.
