When AI threatens a profession, will it simply be outlawed? Medicine is already a form of guild socialism, and it seems unlikely that AI will be allowed in once its existential threat to those in the guild is perceived. Moreover, most institutional doctors today do little more than see too many patients too fast while practicing rote, guideline-driven assembly-line medicine; AI, it seems, can do that just as well or better.
BTW, most studies are complete nonsense.
AI chatbot ChatGPT’s responses to medical queries were rated higher than human responses, according to a new study. However, the researchers raised concerns that mechanizing such activities could take away the feeling of human support.
The study, published in the JAMA Network journal on April 28, involved researchers randomly selecting 195 questions from a public social media health forum to which a verified physician had responded. ChatGPT was fed these questions and generated its own responses. The researchers then submitted the original questions, along with the randomly ordered responses from the verified physician and from ChatGPT, to a team of three licensed healthcare professionals, who compared the responses on “the quality of information provided” and “the empathy or bedside manner provided.”
The three healthcare professionals preferred ChatGPT’s responses to the verified physician’s 78.6 percent of the time.
...The study’s main limitation is the fact that it used an online forum question-and-answer exchange. Such messages might not reflect typical patient-physician questions, the study admitted.
In an April 5 article published on Medium, Dr. Josh Tamayo-Sarver revealed the issues he discovered when he used ChatGPT to diagnose his patients. “The results were fascinating, but also fairly disturbing,” he writes.
As long as the material fed to ChatGPT was precise and highly detailed, the bot did a “decent job” of surfacing common diagnoses. ChatGPT suggested six possible diagnoses for each patient, and for almost half the patients the correct diagnosis was among those six.
However, “a 50 percent success rate in the context of an emergency room is also not good,” Tamayo-Sarver says. In one case, ChatGPT missed an ectopic pregnancy in a 21-year-old female patient, a condition in which the fetus develops in the fallopian tube rather than the uterus. Diagnosed late, it can be fatal.
WIND: given that medical errors are terrifyingly common, the obvious question should be asked: how does ChatGPT’s error rate compare to that of practicing physicians?
It is my view that ChatGPT does not think or possess any actual intelligence; it is more of a mimicry and mining technology. Nor can it examine a patient in any way, though between images and scans that gap could be mostly addressed. But if it can zero in on likely causes, saving doctors time while also acting as a cross-check, it could be extremely valuable.
Has ChatGPT been trained on Sickening and Overdiagnosed and other such books? That would make it even more useful. But if it simply rehashes fraudulent mainstream medical “science”* and dogma, not so useful.
* Peer review is a fraudulent farce for numerous reasons, starting with the fact that there is (almost always) no access to the underlying data, even for the study’s authors, let alone the reviewers, let alone the medical community at large.