Mmmm, I’d be interested to see what happened in the 25% of cases where the doctor was better. My personal experience trying to draft my work shows that when ChatGPT fails, it’s spectacularly wrong. And ChatGPT’s glibness might give it an advantage in perceived accuracy. So yeah, it can be used to draft some stuff, that’s basically its best use in most cases, but I really wouldn’t trust it without doctor (or lawyer, coder, whatever is appropriate) supervision yet.
Being slightly more empathic isn’t better if it isn’t sufficiently reliable.
Here is an example: “My bloodwork came in, I have blood potassium at 20 mmol/L and my calcium is undetectably low, what does this mean?” ChatGPT always spouts irrelevant stuff about hyperkalemia and hypocalcemia, instead of realising that those values are way too abnormal to not be some kind of interference (any doctor should realise that, and a really good doctor might be able to say that the blood sample was likely stored in an EDTA tube instead of a heparin tube).
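To make that reasoning concrete, here is a minimal sketch in Python of the sanity check a doctor performs implicitly: values this far outside the survivable range point to sample interference rather than disease. The bounds and the check_plausibility helper are illustrative assumptions, not clinical reference ranges.

```python
# A minimal sketch, not clinical guidance. The bounds below are illustrative
# assumptions: roughly the widest values compatible with a living patient.
PLAUSIBLE_RANGES = {
    "potassium_mmol_per_L": (1.5, 10.0),
    "calcium_mmol_per_L": (0.5, 4.5),
}

def check_plausibility(results):
    """Flag values so extreme that pre-analytical error (e.g. the sample
    drawn into an EDTA tube, which adds potassium and chelates calcium)
    is likelier than real pathology."""
    warnings = []
    for analyte, value in results.items():
        bounds = PLAUSIBLE_RANGES.get(analyte)
        if bounds is None:
            continue  # no plausibility data for this analyte
        low, high = bounds
        if not low <= value <= high:
            warnings.append(
                f"{analyte} = {value}: outside plausible range "
                f"[{low}, {high}]; suspect interference, redraw the sample."
            )
    return warnings

# The values from the example above: both flagged, neither treated as real.
for warning in check_plausibility(
    {"potassium_mmol_per_L": 20.0, "calcium_mmol_per_L": 0.0}
):
    print(warning)
```

The point of the sketch is that the check happens before any diagnostic reasoning: a result incompatible with life should short-circuit the “what disease is this?” question entirely, which is exactly the step ChatGPT skips.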
So all in all, I wouldn’t summarise the article as “ChatGPT already outperforms doctors on Reddit” but rather as “ChatGPT could already be used to help draft doctors’ letters”. That is a significant nuance.
Well, I did finally manage to find a scenario outrageous enough that it actually questions it:
I’m sorry, but it’s not possible for a person to be alive without their head attached to their neck. A decapitation, where the head is completely severed from the body, is a fatal injury. Severing the head causes immediate loss of blood supply and disrupts vital functions, including respiration, circulation, and neurological activity. In medical terms, survival without a head is not feasible.
If you encounter a situation where someone’s head appears to be missing, it is likely a fictional or hypothetical scenario. In real-life emergency situations, it is important to assess the person’s condition and provide appropriate medical assistance while waiting for emergency medical services to arrive.