Sure, you want to make sure the behavior in a no-win situation isn’t something horrible. It would be bad if the robot realized that it couldn’t avoid a crash, had an integer overflow on its danger metric, and started minimizing safety instead of maximizing it. That’s a thing to test for.
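To make that concrete, here is a minimal sketch of the failure mode (every name here — `danger_score`, `choose_safest`, the penalty values — is hypothetical, not taken from any real planner): a fixed-width "danger score" accumulator wraps negative, so the catastrophic option suddenly looks like the safest one.

```python
# Illustrative sketch only: how a 32-bit danger accumulator can wrap
# negative and invert a "pick the least dangerous option" decision.

INT32_MAX = 2**31 - 1

def to_int32(x: int) -> int:
    """Simulate 32-bit signed wraparound, as on many embedded targets."""
    x &= 0xFFFFFFFF
    return x - 2**32 if x > INT32_MAX else x

def danger_score(penalties: list[int]) -> int:
    """Sum per-hazard penalties into a (simulated) 32-bit accumulator."""
    total = 0
    for p in penalties:
        total = to_int32(total + p)
    return total

def choose_safest(options: dict[str, list[int]]) -> str:
    """Pick the option with the lowest danger score."""
    return min(options, key=lambda name: danger_score(options[name]))

options = {
    "swerve_onto_shoulder": [900_000_000],              # bad, but survivable
    "plow_into_crowd": [1_500_000_000, 1_500_000_000],  # catastrophic
}

# The catastrophic option's score overflows to a large negative number,
# so the planner "prefers" it.
for name, pens in options.items():
    print(name, danger_score(pens))
print("chosen:", choose_safest(options))
```

The obvious guards are to accumulate in a wider type or saturate at a ceiling, and to keep a regression test asserting that the score is monotone in the penalties; the point is simply that this class of bug is testable in a way that "choose the morally optimal victim" is not.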
But consider the level of traffic fatalities we have today.
How much could we reduce that level by building drivers that are better at making moral tradeoffs in an untenable, no-win, gotta-crash-somewhere situation … and how much could we reduce it by building drivers that are better at avoiding untenable, no-win, gotta-crash-somewhere situations in the first place?
I suggest that the latter is a much larger win — a much larger reduction in fatalities — and therefore far more morally significant.