I can’t agree. AI—yes, even mundane old domain-specific AI—has all sorts of potential weird failure modes. (Not an original observation, just conveying the majority opinion of the field.)
Yes, but humans also have all sorts of weird failure modes. We’re not looking for perfection here, just better than humans.
In this instance “weird failure mode” means “an incident that kills many people at once — probable enough to be a significant risk factor, but rare enough that it takes far more autonomous miles, in much more realistic conditions, to determine which driver is safer”.
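The sample-size point can be made concrete with a back-of-the-envelope calculation. A minimal sketch, assuming a human fatality rate of roughly 1.2 deaths per 100 million vehicle-miles and using the standard “rule of three” confidence bound (both are illustrative assumptions, not figures from this thread):

```python
# Rough sketch: how many fatality-free autonomous miles are needed
# before "safer than humans" becomes statistically distinguishable.
# The human fatality rate below is an assumed round number.

HUMAN_FATALITY_RATE = 1.2e-8  # assumed: ~1.2 deaths per 100M vehicle-miles

def miles_for_zero_fatality_bound(target_rate):
    """Rule of three: after observing zero events over m miles, the
    approximate 95% upper confidence bound on the event rate is 3/m.
    Solving 3/m < target_rate gives the miles required."""
    return 3 / target_rate

miles = miles_for_zero_fatality_bound(HUMAN_FATALITY_RATE)
print(f"{miles:.2e} fatality-free miles needed")  # 2.50e+08
```

Note that this is the optimistic case of independent events; a correlated failure mode that kills many people in one rare incident shrinks the effective number of independent observations, pushing the required mileage higher still.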
Yup, humans have weird failure modes, but they don’t occur all over the country simultaneously at 3:27 p.m. on a Wednesday.