More people are using AI tools like ChatGPT for health advice, including homeopathic remedy suggestions. That raises an important question: how closely do AI-generated recommendations align with the judgment of a trained homeopath?
In our latest homeopathy research study, we examined this directly by comparing recommendations from four public AI chatbots with the initial prescriptions made by practitioners across 100 acute cases.
This publication builds on our first AI homeopathy research study.
What we found
The results showed limited agreement between AI chatbot recommendations and practitioner prescribing.
The practitioner’s initial remedy appeared among AI suggestions in 36.5% of cases on average, and as the top recommendation in 20.8% of cases. All four chatbots matched the practitioner’s initial recommendation in only 6% of cases. In 10% of cases, all four chatbots agreed on a remedy that did not match the practitioner’s recommendation.
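To make the different agreement rates above concrete, here is a minimal sketch of how such metrics could be computed. The case data, remedy names, and chatbot labels below are invented for illustration and are not from the study.

```python
# Hypothetical illustration of the agreement metrics; all data here is invented.
# Each case records the practitioner's initial remedy and each chatbot's
# ranked list of suggested remedies.
cases = [
    {"practitioner": "Arnica",
     "bots": {"A": ["Arnica", "Rhus tox"], "B": ["Bryonia"],
              "C": ["Arnica"], "D": ["Ledum", "Arnica"]}},
    {"practitioner": "Belladonna",
     "bots": {"A": ["Aconite"], "B": ["Belladonna", "Ferrum phos"],
              "C": ["Aconite"], "D": ["Gelsemium"]}},
]

n_cases = len(cases)
n_bots = 4

# Rate at which the practitioner's remedy appears anywhere in a chatbot's
# list, averaged over all chatbot-case pairs.
appear_rate = sum(
    case["practitioner"] in suggestions
    for case in cases
    for suggestions in case["bots"].values()
) / (n_cases * n_bots)

# Rate at which it appears as a chatbot's top (first-listed) suggestion.
top_rate = sum(
    len(suggestions) > 0 and suggestions[0] == case["practitioner"]
    for case in cases
    for suggestions in case["bots"].values()
) / (n_cases * n_bots)

# Fraction of cases in which all four chatbots include the practitioner's remedy.
all_four_rate = sum(
    all(case["practitioner"] in s for s in case["bots"].values())
    for case in cases
) / n_cases

print(f"appears among suggestions: {appear_rate:.1%}")
print(f"top recommendation:        {top_rate:.1%}")
print(f"all four chatbots match:   {all_four_rate:.1%}")
```

The distinction the sketch makes explicit is that the first two rates are averaged over chatbot-case pairs, while the all-four rate is a per-case measure.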
We also found that AI outputs were not consistent. Different platforms gave different answers to the same case, and the same chatbot could return different recommendations when asked more than once. Medical disclaimers varied as well, appearing to depend more on the platform than on the seriousness of the complaint.
Why this matters
For people using ChatGPT or other AI tools for homeopathy, these findings suggest that chatbot recommendations are not reliably comparable to practitioner judgment.
AI can generate remedy suggestions, but it does not take a case or apply clinical reasoning in a consistent way.
Read the full study here.
Study details
Authors: Rachael Doherty, Parker Pracjek, Christine D. Luketic, Denise Straiges, and Alastair C. Gray
Journal: Healthcare
Citation: Healthcare 2026, 14(7), 909
DOI: 10.3390/healthcare14070909
Published: April 1, 2026