The autopilot problem seems to arise in the transition phase between the two pilots (the human and the machine). If the human alone does the task, he remains skilled enough to handle the emergency situations. Once the automation is powerful enough to handle everything except situations that even a fully-trained human wouldn’t know how to handle, the deskilling of the human just frees him to focus on more important tasks.
To take the example of self-driving cars: the first iterations might not know how to deal with, say, a zone reconfigured for construction or some other hazard (correct me if I’m wrong; I don’t know much about self-driving car AI). So it’s important that the person in the driver’s seat can take over; if that person is blind, or drunk, or has never operated a car before, we have a problem. But I can imagine that at some point self-driving cars will handle almost any situation better than a person would.
And the risky areas are those where this transition period is very long.