Can you name any major legal issue which has passed through legislatures that fast? Gay marriage, regulating smoking, anything?
Those are social issues with minimal short-term economic impact. I don’t think they’re really the right reference class to use (although if they turn out to be, I would agree that we’re in trouble). Unfortunately, I can’t think of things that are in quite the right reference class in modern times. Perhaps earlier examples would be early automobiles, passenger airplanes (and Concorde), the electric grid, that sort of thing. Most of these seem to have followed a route of “permitted until proven dangerous or undesirable”, but unfortunately they also mostly emerged too long ago, and the social climate has definitely now become more litigious and risk-averse, so I can see how you might find the comparison unpersuasive.
If you’re looking for general issues, it’s difficult to judge. I think you’d agree that individual states do pass some reforms rather quickly. For example, California institutes fuel-efficiency regulations without too much debate, I think. I do admit that states almost never sync up their regulations completely unless the federal government takes it up, but this may be a special case, as enforcement would prove problematic if some states chose to forbid driverless cars while others permitted them.
One could’ve said something similar about tobacco.
Not really—health issues are inherently difficult to isolate. We still seem pretty unsure about most dietary things, for example. On the other hand, the feedback from driverless cars should be a lot more clear and immediate.
Unfortunately, I can’t think of things that are in quite the right reference class in modern times. Perhaps earlier examples would be early automobiles, passenger airplanes (and Concorde), the electric grid, that sort of thing. Most of these seem to have followed a route of “permitted until proven dangerous or undesirable”, but unfortunately they also mostly emerged too long ago, and the social climate has definitely now become more litigious and risk-averse, so I can see how you might find the comparison unpersuasive.
They also were all very slow emergences, decades from first attempts to any market penetration you could call widespread, with considerable legislation slowing them down at points; one thinks of the requirement that cars be preceded by someone on foot waving a red flag (the UK’s Locomotive Act of 1865 did in fact require this).
For example California institutes fuel efficiency regulations without too much debate, I think.
I assume most of those are going to be follow-up regulations, additional tightenings of the screw.
On the other hand, the feedback from driverless cars should be a lot more clear and immediate.
“Who is at fault in this accident?” “Not me, officer!” One thinks of the Toyota acceleration issues, where it may just have been the elderly drivers panicking & blaming the car, but where the lawsuits are probably still going on.
One thinks of the Toyota acceleration issues, where it may just have been the elderly drivers panicking & blaming the car, but where the lawsuits are probably still going on.
I’m not sure if I’ve seen this suggested, but with all the sensors these things have for driving, wouldn’t it be trivial to have a “black box” installed that recorded exactly what happened in the event of an accident? There might be some privacy concerns, etc., but it seems like it’d make things a lot easier (specifically, even if companies are held to be liable, if there are few enough errors, litigation could still be decently cheap).
(Also, Toyota recently settled most of the suits for $1.1 billion, although a few smaller ones are outstanding. But that’s a good point.)
Anyway, I guess overall I’m just a bit more optimistic about the combination of potential immense benefits from the technology with politicians being pragmatic. We shall see? (If that seems a bit more pessimistic than the position I’ve been arguing, take that as me updating on your pessimism.)
My understanding was that the black boxes exist but all said simply that ‘the pedal was pushed so the car went faster’; the boxes can only record what they record, and if the signals or messages themselves were false, they’re not going to pinpoint the true cause. This was why Toyota was tearing through the electrical and computer systems to see how a false pedal signal could be created. Nothing was found: http://en.wikipedia.org/wiki/2009%E2%80%932011_Toyota_vehicle_recalls
On February 8, 2011, the NHTSA, in collaboration with NASA, released its findings into the investigation on the Toyota drive-by-wire throttle system. After a 10-month search, NASA and NHTSA scientists found no electronic defect in Toyota vehicles.[28] Driver error or pedal misapplication was found responsible for most of the incidents.[29] The report ended stating, “Our conclusion is Toyota’s problems were mechanical, not electrical.” This included sticking accelerator pedals, and pedals caught under floor mats.[30]
On black boxes:
In August 2010, the Wall St. Journal reported that experts at the National Highway Traffic Safety Administration had examined the “black boxes” of 58 vehicles involved in sudden-acceleration reports. The study found that in 35 of the cases, the brakes weren’t applied at the time of the crash. In nine other cases in the same study, the brakes were used only at the last moment before impact.[222]
As far as autonomous car adoption rates go:
Also, Toyota recently settled most of the suits for $1.1 billion, although a few smaller ones are outstanding.
$1.1b is worth a lot of risk aversion.
We shall see?
Yeah. The nice thing about autonomous cars is that the consequences are pretty bounded, and so, unlike most/all existential risks, we can afford to just wait and see: all that a wrong national/international decision on autonomous cars costs is trillions of dollars and millions of lives.
I more had in mind the idea that with black boxes installed in self-driving cars, they could record the full situation as seen by all sensors, and thus tell if accidents occurred because of another driver, or while the driver of the car was overriding the self-driving mode, which should simplify things. I’d imagine the car should be able to tell whether the signals came from it or the driver, which should at least drastically reduce the number of “It wasn’t me, officer!” claims.
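As a rough sketch of what such a record might look like (the field names, rates, and buffer size here are purely illustrative assumptions, not any real system’s format), a ring buffer of control frames tagged with their source would be enough to attribute each command to the autonomous system or to a human override:

```python
from collections import deque
from dataclasses import dataclass
from enum import Enum

class ControlSource(Enum):
    AUTONOMOUS = "autonomous"   # command issued by the driving software
    DRIVER_OVERRIDE = "driver"  # command issued by a human override

@dataclass
class BlackBoxFrame:
    timestamp_ms: int
    source: ControlSource
    steering_angle_deg: float
    throttle_pct: float
    brake_pct: float

# Keep only the most recent window before a crash, e.g. 30 s at 100 Hz.
crash_buffer: deque = deque(maxlen=3000)

def record(frame: BlackBoxFrame) -> None:
    crash_buffer.append(frame)

# After an accident, the surviving log itself shows who was in control:
for i in range(5000):
    record(BlackBoxFrame(i * 10, ControlSource.AUTONOMOUS, 0.0, 20.0, 0.0))
print(len(crash_buffer), crash_buffer[-1].source.value)  # 3000 autonomous
```

The `maxlen` deque automatically discards older frames, which is also a partial answer to the privacy concern: only a short pre-crash window ever persists.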
$1.1b is worth a lot of risk aversion.
Well, taken literally, it’s really not. If, say, 20% of the roughly 12 million cars sold annually were automated, an extra profit of just $458 a car would be enough to offset that in a year (obviously, you’d need some extra profit to justify development and such, but still). That said, the liabilities for any serious failure would naturally increase in proportion with sales, so it would really depend on the details of the situation. If there’s a risk that the car will seriously mess up on a software level (e.g., cause 1 accident per day per 10,000 cars, with the problem going unnoticed for several months) or that it might get hacked, that might be too risky to go forward if the manufacturer is liable.
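As a quick check of that break-even arithmetic (the figures are the ones assumed in this comment, not actual market data):

```python
settlement = 1.1e9      # the Toyota settlement, used as a yardstick for liability
annual_sales = 12e6     # assumed total cars sold per year
automated_share = 0.20  # assumed fraction of sales that are automated

automated_cars = annual_sales * automated_share      # 2,400,000 cars
extra_profit_needed = settlement / automated_cars    # per-car profit to offset it
print(round(extra_profit_needed, 2))  # → 458.33
```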
Yeah. The nice thing about autonomous cars is that the consequences are pretty bounded, and so, unlike most/all existential risks, we can afford to just wait and see: all that a wrong national/international decision on autonomous cars costs is trillions of dollars and millions of lives.
Pretty much, yes. There may be some low-hanging fruit that can be obtained efficiently. For example, it would be helpful to have papers by already prominent academics showing the cost-benefit analysis, which should hopefully be picked up by the media and generate some positive public opinion priming.