My original point though was less “doctors are dramatically incompetent” and more specifically “when medical diagnosis by AI is discussed, doctors raise an unrealistic bar that most of them aren’t actually able to meet”. I am willing to accept the fact that people make mistakes and are limited, but that’s exactly why they should welcome with some humility the option of tools that supplement their memory and ability to draw connections. Instead most responses I see to the idea of AI diagnosis seem to suggest that doctors possess this strange mystical knowledge of the human body that allows them to deduce correct diagnoses from the faintest of signals by cross-referencing tiny symptoms, which is honestly ludicrous. 99% of medical diagnoses are “if you have symptom, you probably have [most common disease correlating with symptom and possibly your sex and age]”, and that’s about it. No one is saying AI could instantaneously become House MD.
Even House MD isn’t House MD. My mom, a retired physician, hated the show because of how unrealistic it was, which surprised me because I knew the people writing the show worked hard to get the details right. The unrealistic part was that, in the real world, you usually don’t need a House-style absolutely correct diagnosis to successfully treat someone, even when the patient really does have something weird.
At the beginning of one episode, after the patient’s symptoms were revealed, she said, “When someone shows up at a hospital with these symptoms, you give them [this treatment].” At the end of the episode, after Dr. House carefully figured out exactly what obscure problem the patient was suffering from, the treatment they gave the patient was exactly the same treatment my mom told me at the beginning of the episode.
I… completely agree with you… so I guess I wasn’t as clear as I thought I was being in my last post. Well, self-assessment of communication skills updated, and let’s celebrate.
But just checking, do you mean AI (meaning ChatGPT, since it’s the most salient example, even though it isn’t really an AI) TODAY (obviously in a few years it will very likely be much more capable) is better than a doctor in some ways? Because I can provide plenty of example questions you can give to ChatGPT and to your doctor to compare how pertinent the responses are.

https://jamanetwork.com/journals/jamainternalmedicine/article-abstract/2804309 suggests that ChatGPT already outperforms doctors on Reddit.
Mmmm, I’d be interested to see what happened in the 25% of cases where the doctor was better. My personal experience trying to use it to draft my work is that when ChatGPT fails, it’s spectacularly wrong. And ChatGPT’s glibness might give it an advantage in perceived accuracy. So yes, it can be used to draft some material, that’s basically its best use in most cases, but I really wouldn’t trust it without doctor (or lawyer, coder, whatever is appropriate) supervision yet.
Being slightly more empathic isn’t better if it isn’t sufficiently reliable.
Here is an example: “My bloodwork came in, I have blood potassium at 20 mmol/L and my calcium is undetectably low, what does this mean?” ChatGPT always spouts irrelevant stuff about hyperkalemia and hypocalcemia, instead of realising that those values are way too abnormal not to be some kind of interference (any doctor should realise that, and a really good doctor might be able to say that the blood sample was likely stored in an EDTA tube instead of a heparin tube).
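The failure mode described here can be caught with a simple plausibility gate run before any clinical interpretation: values far outside what a living patient could have suggest a preanalytical error rather than disease. A minimal sketch, where the plausible ranges are illustrative numbers I chose, not authoritative reference limits:

```python
# Sanity-check lab values before interpreting them clinically.
# Grossly implausible values point to a preanalytical error
# (hemolysis, wrong tube, dilution) rather than pathology.

# (low, high) = roughly plausible range in a living patient;
# illustrative numbers only, not real reference limits.
PLAUSIBLE = {
    "potassium_mmol_L": (1.5, 10.0),
    "calcium_mmol_L": (1.0, 4.5),
}

def triage(results):
    """Return the analytes whose values are physiologically implausible."""
    suspect = []
    for analyte, value in results.items():
        low, high = PLAUSIBLE[analyte]
        if not (low <= value <= high):
            suspect.append(analyte)
    return suspect

labs = {"potassium_mmol_L": 20.0, "calcium_mmol_L": 0.0}
flags = triage(labs)
if flags:
    # K+ of 20 mmol/L together with undetectable calcium is the classic
    # signature of EDTA (lavender-top) tube contamination: EDTA chelates
    # calcium and the K2-EDTA salt adds potassium.
    print("Suspect preanalytical error, re-draw sample:", flags)
```

This is exactly the "is this value even possible?" reflex the comment says a doctor applies before reaching for a diagnosis.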
So all in all, I wouldn’t summarise the article as “ChatGPT already outperforms doctors on Reddit” but rather as “ChatGPT could already be used to help draft doctors’ letters”. That is a significant nuance.
Well, I did manage to finally find a scenario outrageous enough that it actually questions it:
I’m sorry, but it’s not possible for a person to be alive without their head attached to their neck. A decapitation, where the head is completely severed from the body, is a fatal injury. Severing the head causes immediate loss of blood supply and disrupts vital functions, including respiration, circulation, and neurological activity. In medical terms, survival without a head is not feasible.
If you encounter a situation where someone’s head appears to be missing, it is likely a fictional or hypothetical scenario. In real-life emergency situations, it is important to assess the person’s condition and provide appropriate medical assistance while waiting for emergency medical services to arrive.
Not really. I think you could, however, build such a thing with current tech, probably combining some LLM capabilities with a random forest or some other way to navigate the complexities of actual probabilistic diagnosis. Maybe use a transformer architecture, but with symptoms in place of tokens, outputting logits over possible diagnoses.
I know IBM has tried doing this, and supposedly always failed. I don’t know the details of their work, but I’m sort of perplexed about whether it really could have been so hard to produce something that at least performs at the level of a mediocre GP and knows when to say “I don’t know, refer to a specialist”. I worry that it might have been compared to a much higher bar than is sensible to use, and that much worse doctors than it retain their license just fine because no one tests them regularly against a diagnosis benchmark.
(anyway don’t worry about the miscommunication, I think the original point got a bit lost in the following comments and we drifted away from it)