One argument I’ve made for self-driving being hard: humans drive on the order of a hundred million miles, on average, before getting into a fatal accident. Would it be that surprising if there were AGI-complete problems lurking in that long tail? My understanding is that Waymo and Cruise both fall back on teleoperation in these cases. One could imagine automating that, a God advising the ant in your analogy, but at that point you’re just doing AGI research.
Driving optimally might be AGI-complete, but you don’t necessarily need to drive optimally; it should be enough to beat typical human drivers on safety (depending on the regulatory regime, of course).
It might be that situations where avoiding an accident is AGI-complete crop up less often per mile than the accidents typical human drivers cause through inattention and worse sensors.
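To make that rate comparison concrete, here’s a toy back-of-the-envelope sketch. The human baseline is the commonly cited figure of roughly one fatality per 100 million vehicle miles in the US; both AV-side rates are made-up assumptions purely for illustration:

```python
# Toy comparison of per-mile fatal-accident rates.
# Human baseline: the oft-cited ~1 fatality per 100M vehicle miles (US).
# Both AV-side rates below are hypothetical assumptions, not data.

HUMAN_FATAL_RATE = 1 / 100_000_000      # ~1 fatality per 100M miles

# Assumed AV failure buckets (illustrative numbers only):
AV_AGI_COMPLETE_RATE = 1 / 500_000_000  # rare scenarios the system can't resolve
AV_ORDINARY_RATE = 1 / 1_000_000_000    # mundane perception/planning errors

av_total_rate = AV_AGI_COMPLETE_RATE + AV_ORDINARY_RATE

print(f"Human fatal rate: {HUMAN_FATAL_RATE:.2e} per mile")
print(f"AV fatal rate:    {av_total_rate:.2e} per mile")
print(f"AV beats the human baseline by "
      f"{HUMAN_FATAL_RATE / av_total_rate:.1f}x")
```

With these made-up numbers the AV never resolves the AGI-complete cases at all, yet it still comes out about 3x safer overall, because its rate of mundane failures is so much lower than the human one.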