Generally, for every ethical problem of the type “is it better to do X or Y?”, we can imagine a traffic situation where a barrier with two gates suddenly appears in front of a fast-moving car, one gate inscribed “if you go through this gate, X will happen”, the other inscribed “if you go through this gate, Y will happen”.
(Just kidding.)
I think this is a good analogy for those attempts to transfer the trolley problem to self-driving cars.
Practical problems, however, still exist. I was talking with a woman who grew up in Karachi, and she said that the custom there is that if there aren’t many cars on the road, you are waiting at a red light, and a motorcycle tries to stop next to you, you automatically start driving forward. That’s a strategy for avoiding being mugged in Karachi.
A driverless car has some advantages in a situation like that, because if the motorcycle rider pulls a gun, the car simply ignores it. On the other hand, there might be other strategies for stopping a driverless car in order to mug its passengers, and you would likely want to make it robust against those.
New crime strategies will probably appear soon after self-driving cars become common.
For example, a group of people could block the entire road, forcing the car to stop. A human driver might recognize this as a criminal ambush and just keep going, but a self-driving car will stop. (That is, a strategy that would be “too expensive” to use against humans may become profitable against self-driving cars.)
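To make that concrete, here is a minimal sketch (in Python, with entirely hypothetical function names; no real vehicle is this literal) of why a fixed “stop for any obstacle” rule is exploitable in a way a human driver’s judgment is not:

```python
# Toy illustration only: a fixed, publicly known policy vs. human judgment.

def self_driving_policy(path_blocked: bool) -> str:
    # The car always stops for obstacles; attackers can rely on this.
    return "stop" if path_blocked else "drive"

def human_policy(path_blocked: bool, looks_like_ambush: bool) -> str:
    # A human may weigh the blockade against the perceived threat.
    if path_blocked and looks_like_ambush:
        return "drive"  # accept the collision risk over the robbery risk
    return "stop" if path_blocked else "drive"

print(self_driving_policy(path_blocked=True))                   # stop
print(human_policy(path_blocked=True, looks_like_ambush=True))  # drive
```

The attacker’s advantage is that the first policy is deterministic and known in advance, so blocking the road always works.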
You could also block the road using dummies, cardboard silhouettes, or whatever else the car’s algorithm would recognize as “a human”. You could even place them strategically to make the car crash into a wall, giving the algorithm a dilemma between killing the one or two humans inside and the dozens of “humans” on the road.
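And a toy sketch of the dummy trick, again with made-up names: a casualty-minimizing rule can only count what the perception layer reports as “humans”, so cardboard silhouettes weigh the same as real people:

```python
# Toy illustration only: the decision layer trusts the detection count.

def swerve_into_wall(passengers: int, detected_humans_on_road: int) -> bool:
    # Minimize expected casualties based on what perception reports.
    return passengers < detected_humans_on_road

# Two real passengers vs. a dozen cutouts classified as humans:
# the rule sacrifices the real passengers to "save" the fakes.
print(swerve_into_wall(passengers=2, detected_humans_on_road=12))  # True
```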
EDIT: Ah, I see this is the point the article makes.