My understanding was that the black boxes exist but all said simply that ‘the pedal was pushed so the car went faster’; the boxes can only record what they record, and if the signals or messages themselves were false, they’re not going to pinpoint the true cause. This was why Toyota was tearing through the electrical and computer systems to see how a false pedal signal could be created. Nothing was found: http://en.wikipedia.org/wiki/2009%E2%80%932011_Toyota_vehicle_recalls
On February 8, 2011, the NHTSA, in collaboration with NASA, released the findings of its investigation into the Toyota drive-by-wire throttle system. After a 10-month search, NASA and NHTSA scientists found no electronic defect in Toyota vehicles.[28] Driver error or pedal misapplication was found responsible for most of the incidents.[29] The report concluded, “Our conclusion is Toyota’s problems were mechanical, not electrical.” This included sticking accelerator pedals and pedals caught under floor mats.[30]
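To make the limitation concrete: an event data recorder just logs whatever value shows up on the signal line, so a spurious pedal signal and a genuine press produce identical records. A minimal sketch (purely illustrative, not based on any actual Toyota EDR design):

```python
# Minimal sketch of why an EDR can't tell a spurious pedal signal from
# a real one: it just logs whatever value arrives on the signal line.
# (Purely illustrative; not based on any actual Toyota EDR design.)

def record_frame(log, pedal_signal, speed):
    """Append whatever the sensors report; provenance is invisible here."""
    log.append({"pedal": pedal_signal, "speed": speed})

log = []
record_frame(log, pedal_signal=0.9, speed=60.0)  # genuine press
record_frame(log, pedal_signal=0.9, speed=65.0)  # hypothetical false signal
# Both frames are structurally identical: the log can only say
# "the pedal was pushed, so the car went faster" in either case.
```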
On black boxes:
In August 2010, the Wall Street Journal reported that experts at the National Highway Traffic Safety Administration had examined the “black boxes” of 58 vehicles involved in sudden-acceleration reports. The study found that in 35 of the cases, the brakes weren’t applied at the time of the crash; in nine other cases in the same study, the brakes were applied only at the last moment before impact.[222]
As far as autonomous car adoption rates go:
Also, Toyota recently settled most of the suits for $1.1 billion, although a few smaller ones are outstanding.
$1.1b is worth a lot of risk aversion.
We shall see?
Yeah. The nice thing about autonomous cars is that the consequences are pretty bounded, and so, unlike most/all existential risks, we can afford to just wait and see: all that a wrong national/international decision on autonomous cars costs is trillions of dollars and millions of lives.
I more had in mind the idea that with black boxes installed in self-driving cars, they could record the full situation as seen by all sensors, and thus tell if accidents occurred because of another driver, or while the driver of the car was overriding the self-driving mode, which should simplify things. I’d imagine the car should be able to tell whether the signals came from it or the driver, which should at least drastically reduce the number of “It wasn’t me, officer!” claims.
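As a rough sketch of what such a record might look like (every field name and the `who_was_driving` helper are hypothetical illustrations, not any real EDR format):

```python
# Hypothetical sketch of a richer black-box log for a self-driving car;
# all field names here are illustrative, not any real EDR format.
from dataclasses import dataclass
from enum import Enum

class CommandSource(Enum):
    AUTONOMOUS = "autonomous"  # input generated by the driving system
    DRIVER = "driver"          # manual override by the human occupant

@dataclass
class LogEntry:
    timestamp: float           # seconds since epoch
    source: CommandSource      # who issued the control input
    throttle: float            # commanded throttle, 0.0-1.0
    brake: float               # commanded brake, 0.0-1.0
    sensor_snapshot_id: str    # pointer to the full sensor recording

def who_was_driving(log, t_crash):
    """Return the source of the last control input at or before the crash."""
    before = [e for e in log if e.timestamp <= t_crash]
    return max(before, key=lambda e: e.timestamp).source
```

Filtering on the `source` field after a crash would show directly whether the self-driving system or the human was in control, which is exactly the “It wasn’t me, officer!” question.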
$1.1b is worth a lot of risk aversion.
Well, taken literally, it’s really not. If, say, 20% of the 12 million cars sold annually were automated, an extra profit of just $458 a car would be enough to offset that in a year (obviously, you’d need some extra profit to justify development and such, but still). That said, the liabilities for any serious failure would naturally increase in proportion with sales, so it would really depend on the details of the situation. If there’s a risk that the car will seriously mess up on a software level (e.g. cause 1 accident per day per 10,000 cars, with the problem going unnoticed for several months) or that it might get hacked, that might be too risky to go forward if the manufacturer is liable.
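Checking the arithmetic (all inputs are the assumed figures from this comment, not real sales data):

```python
# Back-of-the-envelope check of the figures above (all inputs are the
# assumptions from this comment, not real sales data).
settlement = 1.1e9        # Toyota settlement, USD
annual_sales = 12e6       # assumed cars sold per year
automated_share = 0.20    # assumed fraction that is automated

automated_cars = annual_sales * automated_share  # 2.4 million cars
print(f"Offset per car: ${settlement / automated_cars:,.0f}")  # ~$458

# The flip side: the hypothetical failure mode (1 accident per day per
# 10,000 cars, unnoticed for ~3 months) scales with the same fleet size.
accidents = automated_cars / 10_000 * 90
print(f"Accidents before detection: {accidents:,.0f}")  # 21,600
```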
Yeah. The nice thing about autonomous cars is that the consequences are pretty bounded, and so, unlike most/all existential risks, we can afford to just wait and see: all that a wrong national/international decision on autonomous cars costs is trillions of dollars and millions of lives.
Pretty much, yes. There may be some low-hanging fruit that can be had cheaply, though. For example, it would be helpful to have papers by already-prominent academics laying out the cost-benefit analysis, which would hopefully be picked up by the media and generate some positive public-opinion priming.