Similarly, the problem with Uber’s car was that if you have an automatic driving system that can’t recognize pedestrians, can’t anticipate the movements of jaywalkers, freezes in response to dangerous situations, and won’t brake to mitigate collisions, it is absolutely nowhere near ready to guide a car on public roads.
Isn’t the problem that the human driver wasn’t paying attention? My car also cannot recognize pedestrians etc., but it’s fine to allow it on public roads because I am vigilant.
To the extent that Uber is at fault (rather than their employee), it seems to me that it’s not that they let their cars on the road before they were advanced enough; it’s that they didn’t adequately ensure that their drivers would be vigilant (via eye-tracking, training, etc.).
The NTSB report was released last week, showing that Uber’s engineering team was doing some things very wrong (with specifics that had not been reported before). Self-driving programs shouldn’t go on public roads with that kind of system, even with a driver ready to take over.
I’ve seen that. Maybe I’m missing something, but I still stand by my comment. My car is even less capable than the vehicles described there and it’s fine to drive.
Seems like the only reason that my car should be allowed on the roads but these should not be is some kind of expectation of moral hazard or false confidence on the part of the driver. No?
Perhaps one could argue that the car is in an uncanny valley where no one can be taught to monitor it correctly. But then it seems like that should be the emphasis rather than simply that the car was not good enough yet at driving itself.
Humans are known to be extremely bad at this kind of task (passively watching something for hours while remaining ready to respond to danger within a few seconds) and Uber should have known this. If Uber wanted to go ahead with this bad strategy anyway, it should have screened its employees to make sure they were capable of the task they were given.
I don’t think anyone is capable of it. A system that depends on passive vigilance and instant response from a human is broken from the start. Selection and training will not change this. You cannot select for what does not exist, nor train for what cannot be done. There’s a gap that has to be crossed between involving the human at all times and involving the human not at all.
For those who haven’t seen it, starting at second 15 here, the driver can be seen looking down (presumably at their phone) for 6 full seconds before looking up and realizing that they’re about to hit someone. This would not be safe to do in any car.
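For a rough sense of scale, here is a minimal back-of-envelope sketch of how much ground a car covers during 6 seconds of inattention. The 40 mph speed is an assumption for illustration only, not a figure taken from the report:

```python
# Back-of-envelope: distance covered while the driver is looking down.
# NOTE: the 40 mph speed is an assumed illustration, not the speed from the NTSB report.
speed_mph = 40
speed_m_per_s = speed_mph * 1609.34 / 3600   # ~17.9 m/s
inattention_s = 6
distance_m = speed_m_per_s * inattention_s   # ~107 m travelled without watching the road
print(f"{distance_m:.0f} m covered in {inattention_s} s at {speed_mph} mph")
```

At anything like normal road speeds, that is on the order of a hundred meters driven blind.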
There are now actual driverless cars in Phoenix that you can hail. If they get into an emergency situation they need to resolve it entirely on their own because there isn’t time to bring anyone else in.
The step before this was probably having a safety driver in the car who isn’t expected to take over immediately, but can do things like move the car to the side of the road after an emergency stop. In that case the person in the driver’s seat spending most of their time reading their phone would be safe.