Largely for the same reasons that weather forecasting still involves human meteorologists and the draft in baseball still includes human scouts: a system that integrates both human and automated reasoning produces better outcomes, because human beings can see patterns a lot better than computers can.
Also, we would be well-advised to avoid repeating the mistake made by the commercial-aviation industry, which seems to have fostered such extreme dependence on the automated system that many ‘pilots’ don’t know how to fly a plane. A system which automates almost all diagnoses would do that.
I am not saying this narrow AI should be given direct control of IV drips :-/
I am saying that a doctor, when looking at a patient’s chart, should be able to see which diagnoses the expert system considers most likely, and then accept one, ignore them all, order more tests, or do whatever she wants.
A system which automates almost all diagnoses would do that.
No, I don’t think so because even if you rely on an automated diagnosis you still have to treat the patient.
Even assuming that the machine would not be modified to give treatment recommendations, that wouldn’t change the effect I’m concerned about. If the doctor is accustomed to the machine giving the correct diagnosis for every patient, they’ll stop remembering how to diagnose disease and instead remember how to use the machine. It’s called “transactive memory”.
I’m not arguing against a machine with a button on it that says, “Search for conditions matching recorded symptoms”. I’m not arguing against a machine that has automated alerts about certain low-probability risks: if there had been a box that noted the conjunction of “from Liberia” and “temperature spiking to 103 Fahrenheit” in Thomas Eric Duncan during his first hospital visit, there would probably be only one confirmed case of Ebola in the US instead of three, and Duncan might be alive today. But no automated system can be perfectly reliable, and I want doctors who are accustomed to doing the job themselves on the case whenever the system spits out, “No diagnosis found”.
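I mean something as crude as a hard-coded conjunction check. Here is a minimal sketch of the idea; the field names, threshold, and alert text are invented purely for illustration:

```python
# Toy sketch of the kind of hard-coded conjunction alert described above;
# the field names, threshold, and alert text are all invented.
def travel_fever_alert(chart):
    recent_travel = chart.get("recent_travel") == "Liberia"
    high_fever = chart.get("temperature_f", 98.6) >= 103.0
    if recent_travel and high_fever:
        return "ALERT: recent travel from Liberia + fever of 103 F or higher"
    return None

print(travel_fever_alert({"recent_travel": "Liberia", "temperature_f": 103.2}))
```

A rule that narrow and legible supports the doctor’s attention; it doesn’t pretend to replace her diagnosis.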
You are using the wrong yardstick. Ain’t no thing is perfectly reliable. What matters is whether an automated system will be more reliable than the alternative—human doctors.
Commercial aviation has a pretty good safety record while relying on autopilots. Are you quite sure that without the autopilot the safety record would be better?
whenever the system spits out, “No diagnosis found”.
And why do you think a doctor will do better in this case?
I was going to say “doctors don’t have the option of not picking the diagnosis”, but that’s actually not true; they just don’t have the option of not picking a treatment. I’ve had plenty of patients who were “symptom X not yet diagnosed” and the treatment is basically supportive, “don’t let them die and try to notice if they get worse, while we figure this out.” I suspect that often it never gets figured out; the patient gets better and they go home. (Less so in the ICU, because it’s higher stakes and there’s more of an attitude of “do ALL the tests!”)
they just don’t have the option of not picking a treatment.
They do: they call the problem “psychosomatic” and send you to therapy or give you some echinacea “to support your immune system” or prescribe “something homeopathic” or whatever… And in very rare cases, especially honest doctors may even admit that they do not have any idea what to do.
Because the cases where the doctor is stumped are not uniformly the cases where the computer is stumped. The computer might be stumped because a programmer made a typo three weeks ago entering the list of symptoms for diphtheria, because a nurse recorded the patient’s hiccups as coughs, because the patient is a professional athlete whose resting pulse should be three standard deviations slower than the mean … a doctor won’t be perfectly reliable either, but like a professional scout who can say, “His college batting average is .400 because there aren’t many good curveball pitchers in the league this year”, a doctor can detect low-prior confounding factors a lot faster than a computer can.
Well, let’s imagine a system which actually is—and that might be a stretch—intelligently designed.
This means it doesn’t say “I diagnose this patient with X”. It says “Here is a list of conditions along with their probabilities”. It also doesn’t say “No diagnosis found”—it says “Here’s a list of conditions along with their probabilities, it’s just that the top 20 conditions all have probabilities between 2% and 6%”.
It also says things like “The best way to make the diagnosis more specific would be to run test A, then test B, and if it came back in this particular range, then test C”.
A doctor might ask it “What about disease Y?” and the expert system will answer “Its probability is such-and-such; it’s not zero because of symptoms Q and P, but it’s not high because test A came back negative and test B showed results in this range. If you want to get more certain with respect to disease Y, use test C.”
And there would probably be a button which says “Explain”; pressing it would show precisely what leads to the probability of disease X being what it is, and the doctor should be able to poke around and say things like “What happens if we change these coughs to hiccups?”
An intelligently designed expert system often does not replace the specialist—it supports her, allows her to interact with it, ask questions, refine queries, etc.
If you have a patient with multiple nonspecific symptoms who takes a dozen different medications every day, a doctor cannot properly evaluate all the probabilities and interactions in her head. But an expert system can. It works best as a teammate of a human, not as something which just tells her the answer.
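To make the interaction concrete, here is a minimal sketch, assuming a toy naive-Bayes model; every condition, symptom, prior, and likelihood below is invented, and a real system would of course be far richer than this:

```python
# Toy sketch, not a real diagnostic model: a naive-Bayes ranking over a few
# invented conditions, plus the "what if these coughs were hiccups?" query.
# (Normalizing over only these conditions ignores "none of the above";
#  a real system would not.)

PRIORS = {"flu": 0.05, "common cold": 0.20, "pertussis": 0.002}

# P(symptom present | condition) -- invented numbers, for illustration only.
LIKELIHOODS = {
    "flu":         {"cough": 0.80, "hiccups": 0.01, "fever": 0.90},
    "common cold": {"cough": 0.60, "hiccups": 0.01, "fever": 0.20},
    "pertussis":   {"cough": 0.95, "hiccups": 0.01, "fever": 0.40},
}

def ranked_posteriors(observed):
    """Return conditions ranked by P(condition | observed symptoms)."""
    scores = {}
    for condition, prior in PRIORS.items():
        p = prior
        for symptom, present in observed.items():
            likelihood = LIKELIHOODS[condition].get(symptom, 0.05)
            p *= likelihood if present else (1.0 - likelihood)
        scores[condition] = p
    total = sum(scores.values())
    return sorted(((c, p / total) for c, p in scores.items()),
                  key=lambda item: item[1], reverse=True)

chart = {"cough": True, "fever": True}
print(ranked_posteriors(chart))   # a ranked list of probabilities, not a verdict
# "What happens if we change these coughs to hiccups?"
print(ranked_posteriors({"cough": False, "hiccups": True, "fever": True}))
```

Even this toy version shows the two behaviors I care about: a ranked list of probabilities rather than a single “I diagnose X”, and instant re-ranking the moment the recorded coughs turn out to be hiccups.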
Well, let’s imagine a system which actually is—and that might be a stretch—intelligently designed.
Us? I’m a mechanical engineer. I haven’t even read The Checklist Manifesto. I am manifestly unqualified either to design a user interface or to design a system for automated diagnosis of disease—and, as decades of professional failure have shown, neither of these is a task to be lightly ventured upon by dilettantes. The possible errors are simply too numerous and subtle for me to be assured of avoiding them. Case in point: prior to reading that article about Air France Flight 447, it never occurred to me that automation had allowed some pilots to completely forget how to fly a plane.
The details of automation are much less important to me than the ability of people like Swimmer963 to be a part of the decision-making process. Their position grants them a much better view of what’s going on with one particular patient than a doctor who reads a chart once a day or a computer programmer who writes software intended to read billions of charts over its operational lifespan. The system they are incorporated in should take advantage of that.