Well, let’s imagine a system which actually is—and that might be a stretch—intelligently designed.
This means it doesn’t say “I diagnose this patient with X”. It says “Here is a list of conditions along with their probabilities”. It also doesn’t say “No diagnosis found”—it says “Here’s a list of conditions along with their probabilities, it’s just that the top 20 conditions all have probabilities between 2% and 6%”.
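To make the shape of that output concrete, here is a minimal sketch in Python. Everything in it (the conditions, the priors, the symptom likelihoods, the naive-Bayes model itself) is an invented illustration, not a claim about how a real diagnostic system would be built; the only point is that the output is a ranked list of probabilities, never a single verdict.

```python
# A toy sketch of the "ranked differential" idea. The conditions, priors and
# likelihoods below are invented for illustration, not real clinical data.

PRIORS = {"flu": 0.05, "common_cold": 0.20, "pneumonia": 0.01}
LIKELIHOODS = {
    "flu":         {"fever": 0.9, "cough": 0.8, "fatigue": 0.9},
    "common_cold": {"fever": 0.2, "cough": 0.7, "fatigue": 0.4},
    "pneumonia":   {"fever": 0.8, "cough": 0.9, "fatigue": 0.8},
}

def differential(findings):
    """Return every condition with its probability, highest first,
    instead of a single verdict (naive Bayes over a closed set of conditions)."""
    scores = {}
    for cond, prior in PRIORS.items():
        p = prior
        for finding in findings:
            # small default likelihood for findings the toy model doesn't know about
            p *= LIKELIHOODS[cond].get(finding, 0.05)
        scores[cond] = p
    total = sum(scores.values())
    return sorted(((c, p / total) for c, p in scores.items()),
                  key=lambda item: item[1], reverse=True)

for cond, prob in differential({"fever", "cough"}):
    print(f"{cond}: {prob:.1%}")
```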
It also says things like “The best way to make the diagnosis more specific would be to run test A, then test B, and if it came back in this particular range, then test C”.
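That “which test next?” suggestion can be thought of as a value-of-information calculation: prefer the test whose result is expected to shrink the uncertainty the most. Here is a hedged sketch of that idea; the current differential and the test characteristics are made-up numbers, and a real system would of course weigh cost, risk, and time alongside information gain.

```python
# A sketch of "which test should we run next?" as an expected-information-gain
# ranking. The current differential and the test sensitivities below are
# invented for illustration.
import math

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def updated(dist, outcome_likelihood):
    """Bayes update of a distribution over conditions, given
    outcome_likelihood: condition -> P(observed outcome | condition)."""
    unnorm = {c: p * outcome_likelihood[c] for c, p in dist.items()}
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()} if z > 0 else dist

def expected_information_gain(dist, sensitivity):
    """sensitivity: condition -> P(test comes back positive | condition)."""
    p_pos = sum(dist[c] * sensitivity[c] for c in dist)
    post_pos = updated(dist, sensitivity)
    post_neg = updated(dist, {c: 1 - s for c, s in sensitivity.items()})
    expected_entropy = p_pos * entropy(post_pos) + (1 - p_pos) * entropy(post_neg)
    return entropy(dist) - expected_entropy

current = {"flu": 0.45, "common_cold": 0.35, "pneumonia": 0.20}
tests = {
    "rapid_flu_swab": {"flu": 0.90, "common_cold": 0.05, "pneumonia": 0.05},
    "chest_xray":     {"flu": 0.10, "common_cold": 0.05, "pneumonia": 0.85},
}
for name in sorted(tests, key=lambda t: expected_information_gain(current, tests[t]),
                   reverse=True):
    print(f"{name}: expected gain {expected_information_gain(current, tests[name]):.2f} bits")
```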
A doctor might ask it “What about disease Y?” and the expert system will answer “Its probability is such-and-such; it’s not zero because of symptoms Q and P, but it’s not high because test A came back negative and test B showed results in this range. If you want to get more certain with respect to disease Y, use test C.”
And there probably would be a button which says “Explain”; pressing it will show precisely what leads to the probability of disease X being what it is, and the doctor should be able to poke around it and ask things like “What happens if we change these coughs to hiccups?”
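If the model is anything like the toy one above, the “Explain” button is easy to picture: the log-odds of a disease decompose into a prior term plus one additive contribution per finding, so each symptom or test result can be shown pushing the probability up or down. Again, all names and numbers below are invented for illustration.

```python
# A toy sketch of what "Explain" could show for one disease: with a
# naive-Bayes-style model, the log-odds of "disease Y vs. not Y" is a prior
# term plus one additive term per finding. Invented numbers throughout.
import math

PRIOR_Y = 0.02                       # P(disease Y) before seeing anything
EVIDENCE = {                         # finding -> (P(finding | Y), P(finding | not Y))
    "symptom_Q":       (0.70, 0.10),
    "symptom_P":       (0.60, 0.20),
    "test_A_negative": (0.30, 0.80),
}

def explain(findings):
    log_odds = math.log(PRIOR_Y / (1 - PRIOR_Y))
    print(f"prior log-odds: {log_odds:+.2f}")
    for f in findings:
        p_y, p_not = EVIDENCE[f]
        delta = math.log(p_y / p_not)   # positive pushes toward Y, negative away
        log_odds += delta
        print(f"{f}: {delta:+.2f}")
    prob = 1 / (1 + math.exp(-log_odds))
    print(f"P(disease Y | findings) = {prob:.1%}")

explain(["symptom_Q", "symptom_P", "test_A_negative"])
```

And the “what happens if we change these coughs to hiccups” query is just the same decomposition rerun on an edited list of findings, with the two outputs shown side by side.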
An intelligently designed expert system often does not replace the specialist—it supports her, allows her to interact with it, ask questions, refine queries, etc.
If you have a patient with multiple nonspecific symptoms who takes a dozen different medications every day, a doctor cannot properly evaluate all the probabilities and interactions in her head. But an expert system can. It works best as a teammate of a human, not as something which just tells her the answer.
Well, let’s imagine a system which actually is—and that might be a stretch—intelligently designed.
Us? I’m a mechanical engineer. I haven’t even read The Checklist Manifesto. I am manifestly unqualified either to design a user interface or to design a system for automated diagnosis of disease—and, as decades of professional failure have shown, neither of these is a task to be lightly ventured upon by dilettantes. The possible errors are simply too numerous and subtle for me to be assured of avoiding them. Case in point: prior to reading that article about Air France Flight 447, it never occurred to me that automation had allowed some pilots to completely forget how to fly a plane.
The details of automation are much less important to me than the ability of people like Swimmer963 to be a part of the decision-making process. Their position grants them a much better view of what’s going on with one particular patient than a doctor who reads a chart once a day or a computer programmer who writes software intended to read billions of charts over its operational lifespan. The system they are incorporated in should take advantage of that.