According to OpenAI, more than 230 million people share medical information with ChatGPT each week, including diagnoses, lab results, and medication lists, in search of health advice. The company recently introduced a dedicated ChatGPT Health section to support these interactions.

Experts caution, however, that unlike healthcare providers, the chatbot is not bound by strict medical data protection laws in the countries where it operates. The confidentiality of user data therefore rests solely on OpenAI's policy promises, which the company can change at any time. Legal experts warn that if data is breached or misused, users will have limited legal recourse.

Despite these concerns, OpenAI markets the AI as a "health assistant," which boosts user trust, even as the company warns that the service is not intended to diagnose medical conditions. This raises the question of whether such tools should be regulated as medical devices, given their reach and influence. The trend points toward increased scrutiny of data privacy and new regulatory measures, which could reshape how competitors handle user data in AI-driven health applications.