Folks will ask questions like “how do we balance the usefulness of energy against the danger to the environment from using energy”. And the answer is “we should never get into a situation where we have to make that choice”.
Of course, anyone who actually gave that answer to that question would be speaking nonsense. In a non-ideal world, sometimes you won’t be able to maximize or minimize two things simultaneously. It may not be possible to never endanger either the passengers or the pedestrians, just as it may not be possible to both keep using energy and never endanger the environment. It’s exactly the wrong answer.
Sure, you want to make sure the behavior in a no-win situation isn’t something horrible. It would be bad if the robot realized that it couldn’t avoid a crash, had an integer overflow on its danger metric, and started minimizing safety instead of maximizing it. That’s a thing to test for.
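To make that failure mode concrete, here is a minimal hypothetical sketch (the names and numbers are illustrative, not drawn from any real driving system) of how a wrapped 32-bit danger sum could make the catastrophic option look like the “safest” one, which is exactly the sort of thing a test suite should catch.

```java
// Hypothetical sketch: a planner picks the candidate maneuver with the lowest
// summed hazard penalty. If the running sum is a 32-bit int, an extreme
// no-win scenario can overflow it, wrap to a large negative value, and make
// the worst option look like the least dangerous one.
public class OverflowDemo {
    // Sum per-obstacle hazard penalties for one candidate maneuver.
    static int dangerScore(int[] hazardPenalties) {
        int total = 0;
        for (int p : hazardPenalties) {
            total += p; // silently wraps past Integer.MAX_VALUE
        }
        return total;
    }

    public static void main(String[] args) {
        int[] mildManeuver  = {50_000, 75_000};               // modest risk
        int[] awfulManeuver = {2_000_000_000, 2_000_000_000};  // catastrophic risk

        System.out.println(dangerScore(mildManeuver));   // 125000
        System.out.println(dangerScore(awfulManeuver));  // negative: overflow wrapped
        // An argmin over these scores now "prefers" the catastrophic maneuver.
        // Math.addExact, a long accumulator, or a saturating sum would fail
        // loudly (or stay correct) instead of silently flipping the ranking.
    }
}
```

The design point is simply that the no-win case should be covered by tests that feed the scorer extreme inputs, so the degenerate behavior fails loudly rather than silently inverting the objective.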
But consider the level of traffic fatalities we have today.
How much could we reduce that level by making drivers who are better at making moral tradeoffs in an untenable, no-win, gotta-crash-somewhere situation … and how much could we reduce it by making drivers who are better at avoiding untenable, no-win, gotta-crash-somewhere situations in the first place?
I suggest that the latter is a much larger win — a much larger reduction in fatalities — and therefore far more morally significant.