This isn’t a progress-versus-luddite debate. The fact that the human element of an automation+overseer system performs worse than a human who is entirely in charge is not a general argument against automation (at most, it is an argument against replacing a human with an automation+overseer model when the expected gains are small).
The fact that humans can exercise other skills while the automation runs (pilots apparently do a lot when the autopilot is engaged) does not negate the fact that they lose the skills needed to take over from the automation.
The autopilot problem seems to arise in the transition phase between the two pilots (the human and the machine). If the human alone does the task, he remains sufficiently skilled to handle emergency situations. Once the automation is powerful enough to handle all but the situations that even a fully trained human wouldn’t know how to handle, the deskilling of the human simply frees him to focus on more important tasks.
To take the example of self-driving cars: the first iterations might not know how to deal with, say, a zone reconfigured by construction or some other hazard (correct me if I’m wrong, I don’t know much about self-driving car AI). So it’s important that the person in the driver’s seat can take over; if that person is blind, or drunk, or has never operated a car before, we have a problem. But I can imagine that at some point self-driving cars will handle almost any situation better than a person.
And the risky areas are those where the transition period is very long.