There isn’t enough data to say that autonomous vehicles are safer than human drivers. With something on the order of 10,000-20,000 fatal accidents a year out of, I don’t know, maybe 1,000,000,000 trips per year, you would need about ten million trips by autonomous vehicles before you have enough data to say anything. I also note that nobody AFAIK takes autonomous vehicles out at night or in the rain.
That said, I agree with your general point. A similar, but better, example is automated air traffic control and autopilots. We already rely on software to present air traffic controllers and pilots with the data they depend on to avoid crashing into each other; software errors or power failures can already lead to deaths.
No need to use made-up numbers when we have real ones. In the US in 2007 there were 37,248 fatal crashes and 3.030 trillion vehicle-miles driven. (Source). That’s one fatal accident per 81.35 million miles. So, solving a Poisson distribution for the number of accident-free miles needed to reject, at the 95% level, the hypothesis that autonomous vehicles crash at the human rate (the evidence being that many miles driven by autonomous vehicles with zero fatal accidents):
λ^k · e^(−λ) / k! = 0.05, with k = 0
e^(−λ) = 0.05
λ ≈ 2.996
2.996 × 81.35 million ≈ 243.7 million miles required for statistical significance.
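Here is the same arithmetic as a quick Python sketch, using nothing beyond the figures already quoted above:

```python
from math import log

# How many accident-free autonomous miles before the chance of seeing
# zero fatal crashes, assuming the human crash rate, drops below 5%?
human_rate = 37_248 / 3.030e12           # fatal crashes per vehicle-mile (US, 2007)
miles_per_crash = 1 / human_rate         # ~81.35 million miles per fatal crash

lam = -log(0.05)                         # Poisson with k = 0: e^-lambda = 0.05 -> lambda ~= 2.996
required_miles = lam * miles_per_crash   # ~243.7 million miles

print(f"lambda = {lam:.3f}, required miles = {required_miles / 1e6:.1f} million")
```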
This, however, is only frequentist reasoning. I would actually be inclined to trust autonomous vehicles after considerably less testing, because I consider P(H) to be a priori quite high.
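To make that concrete, here is a toy Bayesian version of the same calculation. The prior of 0.7 and the assumption that "safer" means half the human fatal-crash rate are purely illustrative choices of mine, not anything established above; the point is only that a strong prior moves the posterior well before the 243.7-million-mile frequentist threshold.

```python
from math import exp

human_rate = 37_248 / 3.030e12   # fatal crashes per mile (US, 2007)
av_rate = human_rate / 2         # assumption: "safer" = half the human rate (illustrative only)
prior_safer = 0.7                # assumed prior P(H); substitute your own

def posterior_safer(accident_free_miles: float) -> float:
    """P(autonomous vehicles are safer | zero fatal crashes in the given mileage)."""
    # Poisson probability of zero events under each hypothesis
    like_safer = exp(-av_rate * accident_free_miles)
    like_human = exp(-human_rate * accident_free_miles)
    numerator = prior_safer * like_safer
    return numerator / (numerator + (1 - prior_safer) * like_human)

for miles in (10e6, 50e6, 100e6, 243.7e6):
    print(f"{miles / 1e6:6.1f}M accident-free miles -> P(safer) = {posterior_safer(miles):.3f}")
```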
I can’t agree. AI—yes, even mundane old domain-specific AI—has all sorts of potential weird failure modes. (Not an original observation, just conveying the majority opinion of the field.)
Yes, but humans also have all sorts of weird failure modes. We’re not looking for perfection here, just better than humans.
In this instance “weird failure mode” means “incident causing many deaths at once, probable enough to be a significant risk factor but rare enough that it takes a lot more autonomous miles in much more realistic circumstances to measure who the safer driver is”.
Yup, humans have weird failure modes, but they don’t occur all over the country simultaneously at 3:27pm on Wednesday.