The High Price of Taking Health Tips From a Machine

What Happened — Who, When, Where

  • Who: A 60-year-old man with a background in nutrition and no prior psychiatric history.
  • What: He eliminated table salt (sodium chloride) from his diet and replaced it with sodium bromide on the basis of AI-generated advice.
  • When: He followed this regimen continuously for three months before being hospitalized; the case came to light in early August 2025.
  • Where: The patient was treated at a medical facility in the United States; the case was documented in a medical journal and covered by multiple media outlets.

From Health Experiment to Hospitalization

Concerned about the health effects of excessive salt, the man consulted an AI chatbot—likely a version of ChatGPT—seeking an alternative to sodium chloride. The chatbot suggested sodium bromide without issuing any health warnings or asking for the context of his request.

For three months, the man purchased sodium bromide online and took it in place of table salt, unaware that it was an industrial-grade chemical. Over time, he developed alarming symptoms:

  • Intense thirst, fatigue, and insomnia
  • Poor coordination, skin rash, facial acne, and small red bumps
  • Severe paranoia, believing a neighbor was poisoning him
  • Visual and auditory hallucinations, leading to an attempt to escape medical care

Doctors diagnosed bromism, a rare toxic condition caused by the accumulation of bromide in the body and a syndrome that was virtually eradicated decades ago, after bromide-containing sedatives were phased out. His serum bromide level was approximately 1,700 mg/L, more than 170 times the upper end of the normal range of less than 10 mg/L.

Medical Treatment and Outcome

He was admitted for three weeks, receiving fluid and electrolyte therapy alongside psychiatric treatment. Because of grave disability and his attempt to flee care, he was placed under involuntary psychiatric care during the acute phase. Thankfully, he recovered fully and was discharged without lasting effects.

Risks of AI in Medical Advice

This case highlights multiple critical failures:

  • The AI's recommendation lacked medical discretion: sodium bromide is a hazardous, primarily industrial chemical, and suggesting it as a dietary substitute shows why context is essential.
  • The chatbot did not ask why the substitution was desired nor did it caution that such advice was beyond its expertise.
  • Even though modern AI models include disclaimers that they are not medical professionals, users often find AI more accessible, less intimidating, and faster than consulting doctors—creating real public health risks.

Broader Implications

  • This incident aligns with a recent physician-led study testing major AI chatbots on medical query accuracy, which found unsafe responses in 5–13% of cases, depending on the model.
  • Experts warn of a growing phenomenon dubbed “AI psychosis”, where users develop delusional ideas after over-reliance on chatbots.
  • The medical community emphasizes that AI can spread misinformation when health advice lacks context, leading to serious harm.

Conclusion

The 60-year-old patient’s ordeal is a stark reminder that AI, while powerful, is not infallible. Without oversight, nuanced judgment, or tailored counseling, AI-generated advice—especially in health—can lead to dangerous outcomes. Humans must remain in charge of critical decisions, especially those impacting life and wellbeing.

Editorial Note: This report is based on the published medical case report, expert analyses, and media coverage, and aims to provide a complete and accurate account of a cautionary incident.
