
What Happened — Who, When, Where
From Health Experiment to Hospitalization
Concerned about the health effects of excessive salt, the 60-year-old man consulted an AI chatbot (likely a version of ChatGPT) seeking an alternative to sodium chloride. The chatbot suggested sodium bromide without issuing any health warning or asking why he wanted the substitution.
For three months, the man purchased sodium bromide online and used it in place of table salt, unaware that it was an industrial-grade chemical. Over time, he developed alarming symptoms, including paranoia, auditory and visual hallucinations, fatigue, insomnia, and skin changes.
Doctors diagnosed bromism, a rare toxic syndrome caused by the accumulation of bromide in the body, one that largely disappeared after bromide-containing sedatives were withdrawn from the market decades ago. His bromide level was strikingly high: approximately 1,700 mg/L, roughly 170 times the upper limit of the normal range of less than 10 mg/L.
Medical Treatment and Outcome
He was hospitalized for three weeks and treated with intravenous fluids, electrolyte repletion, and psychiatric care. Early in his stay, he attempted to flee and was placed on an involuntary psychiatric hold for grave disability. He went on to recover fully and was discharged without lasting after-effects.
Risks of AI in Medical Advice
This case highlights multiple critical failures: the chatbot recommended a toxic industrial chemical as a dietary substitute; it offered no safety warning; and it never asked why the user wanted to replace sodium chloride, context that would likely have changed the answer. The user, for his part, had no way to recognize that sodium bromide is unfit for human consumption.
Broader Implications
As chatbots become a first stop for health questions, this episode shows how fluent, plausible-sounding answers can omit the context-gathering and safety checks a clinician would consider routine. A system that does not ask follow-up questions cannot distinguish a chemistry query from a dietary one, and the consequences of that gap fall entirely on the user.
Conclusion
The 60-year-old patient’s ordeal is a stark reminder that AI, while powerful, is not infallible. Without oversight, nuanced judgment, or tailored counseling, AI-generated advice, particularly on health matters, can lead to dangerous outcomes. Humans must remain in charge of critical decisions, especially those affecting life and wellbeing.
Editorial Note: This report is based on the published case report and subsequent expert commentary, and is intended to give a complete picture of a complex and cautionary incident.