Link: Giving your healthcare info to a chatbot is, unsurprisingly, a terrible idea
More than 230 million people ask ChatGPT health questions every week, and OpenAI is actively encouraging users to share personal health data, pitching the AI as a tool for navigating health insurance and advocating for yourself as a patient.
OpenAI has launched ChatGPT Health, which it claims stores health data in a secure setup and will not use it to train its models. A separate enterprise product, ChatGPT for Healthcare, serves healthcare providers directly and comes with stricter data-protection measures.
Although OpenAI promises security, the protections for ChatGPT Health users rest primarily on company policy rather than binding legal obligations. Legal experts point out the limits of relying on corporate assurances for data safety, since terms of service can change at any time.
While ChatGPT can help people understand and organize medical information, regulators do not classify it as a medical device, a categorization that largely exempts it from rigorous safety and efficacy review.
The burgeoning use of AI in healthcare raises serious questions about accuracy and trust; documented instances of chatbots giving faulty medical advice spotlight the risks of leaning heavily on the technology.
As AI firms increasingly influence the healthcare sector, their ability to manage sensitive data responsibly and provide reliable information remains under scrutiny. Trust in these tools is crucial, given their growing role in health management.
--
Yoooo, this is a quick note on a link that made me go, WTF? Find all past links here.