What I had in mind was more that with black boxes installed in self-driving cars, they could record the full situation as seen by all sensors, and thus tell whether an accident occurred because of another driver, or while the driver of the car was overriding the self-driving mode, which should simplify things. I'd imagine the car should be able to tell whether the control signals came from it or from the driver, which should at least drastically reduce the number of "It wasn't me, officer!" claims.
$1.1b is worth a lot of risk aversion.
Well, taken literally, it's really not. If, say, 20% of the roughly 12 million cars sold annually were automated, an extra profit of just $458 per car would be enough to offset that in a year (obviously, you'd need some additional profit to justify development costs and such, but still). That said, the liabilities for any serious failure would naturally increase in proportion with sales, so it would really depend on the details of the situation. If there's a risk that the car will seriously mess up on a software level (e.g. cause 1 accident per day per 10,000 cars, with the problem going unnoticed for several months), or that it might get hacked, that might be too risky to go forward with if the manufacturer is liable.
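As a quick sanity check on the arithmetic (the 20% automated share and 12 million annual sales are the assumptions from the comment above, not real market figures):

```python
# Back-of-the-envelope: how much extra per-car profit would offset a
# one-time $1.1B liability, under the assumed sales figures?
liability = 1.1e9        # liability in dollars
annual_sales = 12e6      # assumed total cars sold per year
automated_share = 0.20   # assumed fraction that are automated

automated_cars = annual_sales * automated_share   # 2.4 million cars
extra_profit_per_car = liability / automated_cars

print(round(extra_profit_per_car, 2))  # ≈ 458.33 dollars per car
```

So the $458 figure is just the liability spread across one year of assumed automated-car sales; a longer amortization window or a larger market share would shrink it further.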
Yeah. The nice thing about autonomous cars is that the consequences are pretty bounded, and so, unlike most/all existential risks, we can afford to just wait and see: all that a wrong national/international decision on autonomous cars costs is trillions of dollars and millions of lives.
Pretty much, yes. There may be some low-hanging fruit that can be picked efficiently, though. For example, it would be helpful to have papers by already-prominent academics laying out the cost-benefit analysis, which would hopefully be picked up by the media and generate some positive priming of public opinion.