Self-driving cars offer a straightforward, practical way to talk about AI ethics with people who’ve never written a line of code.
For instance, folks will ask (and have asked) questions like, “If a self-driving car has to choose between saving its owner’s life and saving the lives of five pedestrians, which should it choose?” With overtones of distrust (I don’t want my car to betray me!), class anxiety (I don’t want some rich fuck’s car to choose to run my kids over to save him!), and anti-capitalism/anti-nerdism (No matter what those rich dorks at Google choose, it’ll be wrong and they should be sued to death!).
And the answer that Google seems to have adopted is, “It should see, think, and drive well enough that it never gets into that situation.”
Which is exactly the right answer!
Almost all of the benefit of programming machines to make moral decisions is going to amount to avoiding dilemmas — not deciding which horn to impale oneself on. Humans end up in dilemmas (“Do I hit the wall and kill myself, or hit the kids on the sidewalk and kill them?”) when we don’t see the dilemma coming in time to avoid it. Machines with better senses and more predictive capacity don’t have to have that problem.
Folks will ask questions like, “How do we balance the usefulness of energy against the danger to the environment from using energy?” And the answer is, “We should never get into a situation where we have to make that choice.”
Of course, anyone who actually gave that answer to that question would be speaking nonsense. In a non-ideal world, sometimes you won’t be able to maximize or minimize two things simultaneously. It may not be possible to protect both the passengers and the pedestrians in every case, just as it may not be possible to keep using energy and never endanger the environment. It’s exactly the wrong answer.
Sure, you want to make sure the behavior in a no-win situation isn’t something horrible. It would be bad if the robot realized that it couldn’t avoid a crash, had an integer overflow on its danger metric, and started minimizing safety instead of maximizing it. That’s a thing to test for.
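To make that failure mode concrete, here is a minimal, hypothetical sketch (in Java; the names `Maneuver` and `dangerScore` are made up, not anything from a real autonomy stack) of how a wrapped fixed-width sum can flip the sign of a danger score, so that a naive “pick the least dangerous maneuver” loop picks the most dangerous one, along with the kind of sanity check you’d want in a test suite:

```java
// Minimal sketch, not a real autonomy stack: every name here is hypothetical.
import java.util.List;

public class DangerOverflowDemo {
    // A candidate maneuver with the per-factor penalties the planner adds up.
    record Maneuver(String name, int[] penalties) {}

    // Plain int addition: in Java this silently wraps past Integer.MAX_VALUE.
    static int dangerScore(Maneuver m) {
        int total = 0;
        for (int p : m.penalties()) {
            total += p;
        }
        return total;
    }

    public static void main(String[] args) {
        List<Maneuver> options = List.of(
            new Maneuver("brake hard", new int[] {50_000, 20_000}),
            // Every factor maxed out: the true danger is enormous, but the
            // wrapped sum comes out as -2, so it *looks* safest of all.
            new Maneuver("swerve into the crowd",
                         new int[] {Integer.MAX_VALUE, Integer.MAX_VALUE})
        );

        // Naive "minimize danger" selection.
        Maneuver chosen = options.get(0);
        for (Maneuver m : options) {
            System.out.printf("%22s: danger = %d%n", m.name(), dangerScore(m));
            if (dangerScore(m) < dangerScore(chosen)) {
                chosen = m;
            }
        }
        System.out.println("chosen: " + chosen.name()); // the catastrophic option

        // The kind of test the post is talking about: the metric should never
        // be negative, so this check catches the sign flip before it ships.
        boolean overflowed = options.stream().anyMatch(m -> dangerScore(m) < 0);
        System.out.println("overflow caught by sanity check: " + overflowed);
    }
}
```

A wider accumulator, saturating arithmetic, or simply asserting that the metric never goes negative would all catch or prevent this particular flavor of horrible behavior.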
But consider the level of traffic fatalities we have today.
How much could we reduce that level by making drivers who are better at making moral tradeoffs in an untenable, no-win, gotta-crash-somewhere situation … and how much could we reduce it by making drivers who are better at avoiding untenable, no-win, gotta-crash-somewhere situations in the first place?
I suggest that the latter is a much larger win — a much larger reduction in fatalities — and therefore far more morally significant.
And the answer that Google seems to have adopted is, “It should see, think, and drive well enough that it never gets into that situation.”
I don’t think designing a car with the idea that it will never get into accidents is a great idea. Even if the smart car itself makes no mistake, it can still get into a crash, and it should behave optimally in that crash.
Even outside of smart cars, there are design decisions that can increase the safety of the car’s owner at the expense of the passengers of the car you crash into.
I don’t think designing a car with the idea that it will never get into accidents is a great idea.
I totally agree! You want to know what the limit cases are, even if they will almost never arise. (See my other response on this thread.)
But if you want to make a system that drives more morally — that is, one that causes less harm — almost all the gain is in making it a better predictor so as to avoid crash situations, not in solving philosophically-hard moral problems about crash situations.
Part of my point above is that humans can’t even agree with one another what the right thing to do in certain moral crises is. That’s why we have things like the Trolley Problem. But we can agree, if we look at the evidence, that what gets people into crash situations is itself avoidable — things like distracted, drunken, aggressive, or sleepy driving. And the gain of moving from human drivers to robot cars is not that robots offer perfect saintly solutions to crash situations — but that they get in fewer crash situations.