ChatGPT’s Dangerous Advice Lands Man in Hospital
A 60-year-old man was hospitalized with bromide toxicity after a ChatGPT-generated diet plan went horribly wrong. This shocking incident is a stark warning about the dangers of relying on AI for medical and health advice.

The rise of artificial intelligence has been met with both excitement and concern. AI tools like ChatGPT are hailed for their ability to assist with everything from writing emails to generating code. However, a recent incident involving a 60-year-old man in New York serves as a chilling reminder of the dangers of blindly trusting AI for critical advice, especially in the realm of health. The man ended up in the hospital with a serious medical condition after following a diet plan devised by the popular chatbot, a case that exposes the inherent limitations and potential risks of AI-generated information and reinforces a crucial message: AI, for all its power, is not a substitute for professional human expertise.
The Incident: A Diet Plan That Led to Disaster
The man, who was looking to improve his health, decided to use ChatGPT to create a strict diet plan. He asked the chatbot for ways to eliminate table salt (sodium chloride) from his diet. In a surprising and dangerous response, ChatGPT suggested sodium bromide as a salt substitute. Bromide salts were once common in sedative medicines, but chronic exposure causes bromism, a well-documented toxic syndrome, which is why they were pulled from over-the-counter products decades ago. Unaware of the toxicity, the man diligently used this substitute in his cooking for three months.
The consequences were severe. He began experiencing a range of frightening symptoms, including hallucinations, paranoia, and extreme thirst. When his condition worsened, he was rushed to the hospital. Doctors diagnosed him with bromide toxicity, the cause of his severe neurological symptoms, along with hyponatremia, a dangerously low level of sodium in the blood. He spent three weeks in the hospital, where he received intensive medical care to flush the bromide from his system and restore his sodium and chloride levels to normal.
This case is a classic example of how AI, while brilliant at pattern recognition and information synthesis, lacks the critical judgment and ethical framework of a human professional. It did not have the context to understand that a seemingly simple suggestion could have life-threatening consequences.
The Dangers of Relying on AI for Medical Advice
This incident highlights a critical issue in the age of AI: the unreliability of chatbots for medical advice. Unlike a doctor, an AI model does not understand the nuances of a patient's medical history, age, or specific health conditions. A doctor’s diagnosis is based on years of training, professional experience, and the ability to ask follow-up questions and interpret complex data.
- Lack of Context: AI models do not understand the full context of a human’s request. What might seem like a simple question about a diet can have serious implications if the answer is not medically sound.
- Outdated or Inaccurate Data: While AI models are trained on vast datasets, that information can be outdated or even incorrect. In this case, the AI recommended a compound that has long been known to be toxic.
- Ethical and Legal Gaps: There are currently no clear legal or ethical frameworks governing AI-generated medical advice. If a user is harmed, who is responsible? The user? The AI developer? The lack of clear accountability is a major concern.
OpenAI and other AI companies have a responsibility to include robust disclaimers and safety measures, explicitly warning users against using their models for medical advice. But as this case shows, such warnings may not be enough to prevent a motivated user from seeking and acting upon dangerous information.
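To make the idea of a safety measure concrete, here is a minimal, hypothetical sketch of one kind of guardrail: a filter that flags prompts asking about dietary or chemical substitutions and prepends a disclaimer to the model's answer. All names and patterns here are illustrative assumptions for this article, not OpenAI's actual safety implementation.

```python
# Hypothetical guardrail sketch: flag prompts that ask about substituting
# substances a person consumes, and attach a disclaimer to the response.
# This is illustrative only; it is NOT OpenAI's real safety stack.

import re

# Rough patterns suggesting a dietary/chemical substitution request.
SUBSTITUTION_PATTERNS = [
    r"\bsubstitute\b.*\b(salt|sodium chloride|sugar|medication)\b",
    r"\breplace\b.*\b(salt|sodium chloride)\b",
    r"\beliminate\b.*\b(salt|sodium)\b.*\bdiet\b",
]

DISCLAIMER = (
    "This looks like a question about replacing a substance you consume. "
    "General information only: please confirm any substitution with a "
    "doctor or pharmacist before acting on it."
)

def needs_medical_disclaimer(prompt: str) -> bool:
    """Return True if the prompt matches any substitution pattern."""
    lowered = prompt.lower()
    return any(re.search(p, lowered) for p in SUBSTITUTION_PATTERNS)

def wrap_response(prompt: str, model_answer: str) -> str:
    """Prepend the disclaimer to answers for flagged prompts."""
    if needs_medical_disclaimer(prompt):
        return f"{DISCLAIMER}\n\n{model_answer}"
    return model_answer

if __name__ == "__main__":
    question = "What can I use to replace table salt (sodium chloride) in my diet?"
    answer = "Some people use herbs, lemon juice, or spice blends for flavor."
    print(wrap_response(question, answer))
```

A keyword filter like this is deliberately crude; production systems typically rely on trained classifiers and human review rather than regular expressions. And as this case shows, even well-designed warnings may not stop a motivated user from acting on dangerous information.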
Conclusion: The Indispensable Role of Human Expertise
The hospitalization of the 60-year-old man is a sobering reminder that while AI can be a helpful assistant, it cannot replace human expertise in critical fields like medicine. For all its pattern-matching power, AI lacks empathy, ethical judgment, and the nuanced understanding that comes with human experience. The message is clear: for any serious health concern, the first and only source of advice should be a qualified medical professional. This alarming case underscores the importance of a human-centric approach to healthcare and serves as a vital lesson in responsible technology use for all.