I’ve seen that. Maybe I’m missing something, but I still stand by my comment. My car is even less capable than the vehicles described there and it’s fine to drive.
Seems like the only reason my car should be allowed on the roads while these shouldn't be is some expectation of moral hazard or false confidence on the part of the driver. No?
Perhaps one could argue that the car is in an uncanny valley where no one can be taught to monitor it correctly. But then it seems like that should be the emphasis, rather than simply that the car was not yet good enough at driving itself.
Humans are known to be extremely bad at this kind of task (passively watching something for hours while remaining ready to respond to danger within a few seconds) and Uber should have known this. If Uber wanted to go ahead with this bad strategy anyway, it should have screened its employees to make sure they were capable of the task they were given.
I don’t think anyone is capable of it. A system that depends on passive vigilance and instant response from a human is broken from the start. Selection and training will not change this. You cannot select for what does not exist, nor train for what cannot be done. There’s a gap that has to be crossed between involving the human at all times and involving the human not at all.