
OpenAI has released a health chatbot developed
with input from more than 260 physicians across 60 countries and specialties.
The company wants users' healthcare records, but assures that ChatGPT Health is meant to “support,
not replace medical care,” and is not a diagnostic or treatment tool.
Another cautionary note: the site was designed to help users better understand patterns in their health over
time, not to replace clinical care or deliver medical diagnoses.
This is the time of year when people make resolutions for the new year, and good health is often among them. But I am concerned that users will not double-check the
results ChatGPT Health gives them.
There is another worry: when health data sits with a doctor or an insurer, HIPAA protects it; voluntarily upload it to
OpenAI, and HIPAA does not apply.
OpenAI did not make clear whether it would share this information with the physicians from whom it received input, or whether
pharmaceutical companies will one day be allowed to run ads on the platform.
Will the queries turn into lead generation not only for pharmaceuticals, but for clothing, furniture, restaurants, diet
plans, and exercise equipment?
Because OpenAI made it possible for other apps to integrate with the site, a wealth of personal data about an individual can pool there, a concentration about as dangerous as the endless links that hospitals have created with MyChart.
That integration allows users to bring in personal information including medical records, Apple Health data, lab tests, MyFitnessPal
nutrition insights, WeightWatchers dietary plans, AllTrails for hiking, Instacart for meal planning, and Peloton for workouts.
“Give me a summary of my overall health” is probably
one of the scariest requests you can make of a chatbot when you live with debilitating chronic pain. More than 33 million U.S. adults have osteoarthritis, according to a 2024 CDC study.
OpenAI asserts that ChatGPT
Health queries can help users prepare for physician meetings. But the answers could also send people in the wrong direction if they act on them without first consulting a physician.
ChatGPT Health, per the company,
operates in a segregated environment. The chat site runs on “enhanced privacy controls” that include encryption and isolated memory to keep health conversations separate from other chat
interactions. The company says it does not use health data from ChatGPT Health to train its foundation models.
When you are waitlisted for ChatGPT Health, a short questionnaire asks if you want to understand
your Apple Health data. Users of the platform can connect data from the Apple Health app on iOS devices,
their electronic medical records (a connection available only in the U.S.), and other wellness apps like MyFitnessPal and Function.
OpenAI explains that users can further strengthen access controls
by enabling multi-factor authentication (MFA),
which adds an extra layer of protection to help prevent unauthorized access.
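MFA typically works by requiring a short-lived code from a second factor, such as an authenticator app, in addition to a password. As a rough illustration of the general mechanism, not OpenAI's implementation, here is a minimal Python sketch of a time-based one-time password (TOTP) per RFC 6238; the Base32 secret below is a hypothetical example.

```python
# Minimal TOTP sketch (RFC 6238): derives a 6-digit code from a shared
# secret and the current time, the scheme most authenticator apps use.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Return the current one-time code for a Base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval             # 30-second time step
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Hypothetical secret for illustration only; a real one is issued at enrollment.
print(totp("JBSWY3DPEHPK3PXP"))  # e.g. "492039", changes every 30 seconds
```

Because the server and the authenticator derive the same code independently, an attacker with only a stolen password still cannot log in.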
Just this week, numerous media outlets reported that Character.AI and Google agreed to settle lawsuits alleging that their
chatbots contributed to mental health crises and suicides among young people.
In a
Wednesday court filing in one case, in which Megan Garcia sued Google and Character.AI after her son died by suicide, the complaint claims Character.AI’s chatbot engaged her
14-year-old son, Sewell Setzer III, in harmful interactions. It alleges negligence, wrongful death, deceptive trade practices, and product liability.
Garcia’s case shows the agreement was reached with Character.AI, its founders Noam Shazeer and Daniel De Freitas, and Google, all of whom were named as
defendants in the case.
Google was named because in August 2024 the company agreed
to a $2.7 billion licensing deal with Character.AI and hired
Shazeer and De Freitas, both of whom previously worked at the search company. The two
joined Google’s AI unit DeepMind, according to one report.