I don’t see people complaining about their Teslas braking too often and being too slow/frustrating.
If the object had been not a person but anything else that moves, the Uber still shouldn’t have crashed into it.
You just need to recognize that there’s an object with mass that happens to move into the lane. I don’t see how that’s a task that needs advanced safety concepts.
I agree that it shouldn’t need advanced safety concepts (i.e. the sort of things discussed on the Alignment Forum). The things-we-want-a-car-to-do are complicated, but not as complicated as the things-we-want in general. Self-driving cars are not an alignment-complete problem.
But it’s still the case that “don’t crash into things” is a more complicated problem than it seems on the surface. “Recognize that there’s an object with mass that happens to move into the lane” isn’t enough; we also need to notice objects which are going to move into the lane, which means we need trajectory tracking and forecasting. And we need the trajectory-tracker to be robust to the object classification changing (or just being wrong altogether), to objects getting confused with each other across timesteps, to reflections and pictures and moving lights, to missing or unreliable lane markers, to things in the world zig-zagging around on strange trajectories, and so on. It’s a task which requires a lot of generalizability and handling of strange things.
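To make the “more complicated than it seems” point concrete, here’s a minimal sketch of the forecasting part: a class-agnostic, constant-velocity trajectory forecast with a crude “will this end up in my lane soon?” check. Everything here (the TrackedObject fields, should_brake, the thresholds) is a hypothetical illustration under simplifying assumptions, not any real self-driving stack’s API; a production system would use proper multi-object tracking and much richer motion models.

```python
# Minimal sketch: class-agnostic trajectory forecasting + in-lane collision check.
# All names and thresholds are illustrative assumptions, not a real system's API.

from dataclasses import dataclass


@dataclass
class TrackedObject:
    """An object track in the ego vehicle's frame (meters, meters/second).

    Deliberately has no object-class field: the braking decision shouldn't
    depend on whether the classifier says 'pedestrian', 'bicycle', or 'unknown'.
    """
    x: float   # longitudinal position, +x = ahead of the car
    y: float   # lateral position, +y = left of the car
    vx: float  # longitudinal velocity (relative to ego)
    vy: float  # lateral velocity


def forecast(obj: TrackedObject, dt: float) -> TrackedObject:
    """Constant-velocity forecast: where the object will be in dt seconds."""
    return TrackedObject(obj.x + obj.vx * dt, obj.y + obj.vy * dt, obj.vx, obj.vy)


def should_brake(obj: TrackedObject,
                 horizon_s: float = 3.0,
                 step_s: float = 0.1,
                 lane_half_width_m: float = 1.8,
                 stop_distance_m: float = 8.0) -> bool:
    """Brake if the forecast puts the object inside our lane and close ahead
    at any point within the planning horizon."""
    t = 0.0
    while t <= horizon_s:
        future = forecast(obj, t)
        in_lane = abs(future.y) <= lane_half_width_m
        close_ahead = 0.0 <= future.x <= stop_distance_m
        if in_lane and close_ahead:
            return True
        t += step_s
    return False


# Example: something off to the side, drifting toward us at ~1.5 m/s.
# It isn't in our lane *yet* -- which is exactly why "notice objects which
# are going to move into the lane" requires forecasting, not just detection.
crossing = TrackedObject(x=6.0, y=5.0, vx=0.0, vy=-1.5)
print(should_brake(crossing))  # True: the forecast crosses into the lane in time
```

The fragility enters upstream of a function like this: if the tracker confuses which object is which between timesteps, or a reflection makes the velocity estimate flip sign, the forecast is garbage no matter how simple the geometry is.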
That’s the sense in which we need to solve “more limited” versions of AI safety in order to build self-driving cars. We need to be able to engineer reliable AI systems—systems which don’t rely on the real world never being weird in order to work.