And the answer that Google seems to have adopted is, “It should see, think, and drive well enough that it never gets into that situation.”
I don’t think designing a car around the idea that it will never get into accidents is a great idea. Even if the smart car itself makes no mistake, it can still end up in a crash, and it should behave optimally when that happens.
Even outside of smart cars, there are design decisions that can increase the safety of the car’s owner at the expense of the passengers of the car you crash into.
I don’t think designing a car with the idea that it will never get into accidents is a great idea.
I totally agree! You want to know what the limit cases are, even if they will almost never arise. (See my other response on this thread.)
But if you want to make a system that drives more morally — that is, one that causes less harm — almost all the gain is in making it a better predictor so it can avoid crash situations, not in solving philosophically hard moral problems about crash situations.
Part of my point above is that humans can’t even agree with one another what the right thing to do in certain moral crises is. That’s why we have things like the Trolley Problem. But we can agree, if we look at the evidence, that what gets people into crash situations is itself avoidable — things like distracted, drunken, aggressive, or sleepy driving. And the gain of moving from human drivers to robot cars is not that robots offer perfect saintly solutions to crash situations — but that they get in fewer crash situations.