But so what? People are not safe; they have slower reaction times than machines, especially when intoxicated.
To clarify, what I intend to claim is that self-driving cars will not be able to achieve safety comparable to (sober) humans without correctly handling unrecognized objects and other unusual situations. Weird stuff occurs often enough in the real world that handling it will be necessary even for a human-like level of safety.
Is there evidence for this claim? I’ve only ever seen evidence to the contrary.
The very first thing in that link:
For more than a year now, Tesla has been releasing Autopilot safety numbers to show that autopilot is safer than a human driver in average driving conditions. [...] Autopilot is primarily used on highways, which have fewer accidents than surface streets because driving conditions are much simpler.
(emphasis added).
The evidence is: if it were safer in conditions other than the easiest possible conditions, then Tesla would be shouting that fact from the rooftops. Instead, they’re advertising only very limited data about how safe they are in the easy case.
More generally, if unusual situations are the main barrier to full self-driving cars, then we’d expect to see lots of automatic safety features for handling easy cases—automatic braking, cruise control, etc. We do see that, and there’s plenty of evidence that they work great for the easy cases they’re designed for. Tesla’s autopilot is an example of that. But that doesn’t give us self-driving cars; it doesn’t let us take the human out of the loop entirely, and taking the human out of the loop is where the large majority of the value is.
Now, if Tesla (or any self-driving car group) published data showing that autopilot is safer than (sober) humans even in the conditions where most accidents occur, then that would be the sort of thing which would let us take the human out of the loop. That’s the kind of safety we need to actually get the majority of the value from self-driving. I do not see evidence of that, and in this case absence of evidence is pretty strong evidence of absence—because there are companies/groups who would want to share that evidence if they had it.
Elon Musk said a while ago that a fair standard for allowing self-driving cars would be for them to be 10x safer.
Publishing a study that says Tesla Autopilot is 10% safer than regular driving wouldn’t be very valuable, and there’s huge measurement uncertainty when you have to define what “conditions where most accidents occur” means.
I would expect us to get that kind of data only once there’s a crash and the automaker wants to convince a jury that the car shouldn’t be blamed.
What makes driving on surface streets so different from driving on highways that current state-of-the-art ML techniques wouldn’t be able to handle it with slightly more data and compute?
Unlike natural language processing, AI doctors, or household robots, driving seems like a very limited, non-AGI-complete task to me, because a self-driving car never truly interacts with humans or objects beyond avoiding hitting them.
we also need to notice objects which are going to move into the lane; we need trajectory tracking and forecasting. And we need the trajectory-tracker to be robust to the object classification changing (or just being wrong altogether), or sometimes confusing which object is which across timesteps, or reflections or pictures or moving lights, or missing/unreliable lane markers, or things in the world zig-zagging around on strange trajectories, or etc.
I would claim all of the above are also required for driving on the highway.
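For concreteness, here is a minimal sketch of the kind of per-frame track-and-forecast loop the quoted passage describes. The greedy nearest-neighbour matching, the constant-velocity forecast, and all the names here are illustrative assumptions on my part, not any group’s actual pipeline:
```python
# Minimal illustration of per-frame trajectory tracking and forecasting.
# Constant-velocity extrapolation and greedy nearest-neighbour association
# are deliberate simplifications for the sake of the example.
from dataclasses import dataclass, field

@dataclass
class Track:
    track_id: int
    positions: list = field(default_factory=list)  # [(x, y), ...] per frame

    def velocity(self):
        """Estimate velocity from the last two observed positions."""
        if len(self.positions) < 2:
            return (0.0, 0.0)
        (x0, y0), (x1, y1) = self.positions[-2], self.positions[-1]
        return (x1 - x0, y1 - y0)

    def forecast(self, steps):
        """Constant-velocity extrapolation `steps` frames ahead."""
        x, y = self.positions[-1]
        vx, vy = self.velocity()
        return [(x + vx * k, y + vy * k) for k in range(1, steps + 1)]


def associate(tracks, detections, max_dist=3.0):
    """Greedily match detections to existing tracks by distance.

    Matching on position only (not on the classifier's label) is what makes
    the tracker robust to the object classification flickering or being wrong.
    """
    unmatched = list(detections)
    for track in tracks:
        if not unmatched:
            break
        tx, ty = track.positions[-1]
        best = min(unmatched, key=lambda d: (d[0] - tx) ** 2 + (d[1] - ty) ** 2)
        if (best[0] - tx) ** 2 + (best[1] - ty) ** 2 <= max_dist ** 2:
            track.positions.append(best)
            unmatched.remove(best)
    return unmatched  # leftover detections would spawn new tracks


# Toy usage: one object drifting toward the ego lane (lane spans x in [-1, 1]).
track = Track(track_id=0, positions=[(4.0, 10.0), (3.5, 9.0)])
associate([track], [(3.0, 8.0)])
will_enter_lane = any(-1.0 <= x <= 1.0 for x, _ in track.forecast(steps=10))
print("forecast says object enters lane:", will_enter_lane)
```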
This is secondhand, but… two years ago I worked with a guy who had been on Tesla’s autopilot team. From the sound of it, they stayed in the lane mainly via some hand-coded image processing which looked for a yellow/white strip surrounded by darker color. For most highway driving, that turned out to be good enough.
I’m not sure how much state-of-the-art ML techniques (i.e. deep learning) are even being used for self-driving. I’m sure they’re used for some subtasks, like object recognition, but my (several-years-out-of-date and secondhand) understanding is that current projects aren’t actually using it end-to-end; it’s just specific subcomponents. Slightly more data/compute don’t matter much when key limiting pieces aren’t actually using ML.
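To illustrate what the hand-coded lane detection described above might look like, here is a toy sketch of a “bright strip surrounded by darker pixels” heuristic on a synthetic image. The thresholds, window sizes, and everything else are made up for the example; this is not Tesla’s code:
```python
# Toy illustration of "look for a bright (yellow/white) strip surrounded by
# darker pixels" as a lane-marking heuristic. Synthetic image, numpy only.
import numpy as np

def find_lane_strips(row, strip_width=5, brightness_margin=60):
    """Return column indices where a bright strip sits between darker regions."""
    hits = []
    w = strip_width
    for c in range(w, len(row) - 2 * w):
        strip = row[c:c + w].mean()
        left = row[c - w:c].mean()
        right = row[c + w:c + 2 * w].mean()
        if strip - max(left, right) > brightness_margin:
            hits.append(c)
    return hits

# Synthetic grayscale road image: dark asphalt with one bright lane marking.
image = np.full((100, 200), 40, dtype=np.uint8)   # asphalt
image[:, 120:125] = 230                           # painted stripe
bottom_row = image[-1].astype(float)
print("lane marking near columns:", find_lane_strips(bottom_row))
```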
From the sound of it, they stayed in the lane mainly via some hand-coded image processing which looked for a yellow/white strip surrounded by darker color.
That is what I heard about other research groups, but it’s a bit surprising coming from Tesla. I’d imagine things have changed dramatically since then, considering this video, which, albeit insufficient as any sort of safety validation, still demonstrates they’re way beyond just following lane markings. According to Musk, they’re pushing hard for end-to-end ML solutions. That would make sense given the custom hardware they’ve developed and the data leverage they have from their massive fleet, combined with over-the-air updates.
It’s certainly plausible that things have changed dramatically, although my default guess is that they haven’t—a pile of hacks can go a surprisingly long way, and the only tricky-looking spot I saw in that video was a short section just after 1:30. And Musk saying that they’re “pushing hard for end-to-end ML” is exactly the sort of thing I’d expect to hear if such a project was not actually finding any traction. I’m sure they’re trying to do it, but ML is finicky at the best of times, and I expect we’d hear it shouted from the rooftops if end-to-end self-driving ML was actually starting to work yet.