Conventional morality would dictate that the car minimize global loss of life first, then permanent brain damage, then permanent bodily damage. I suspect that in the future, other algorithms will be illegal but will still exist.
However, the lives each car has the most effect on are those of the people inside it, so in most situations its actions would be directed toward protecting those people.
The issue is that this could create bad incentives. E.g. motorcyclists might stop wearing helmets, or even act recklessly around self-driving cars, knowing the cars will swerve to avoid them even at the cost of crashing. Or people might stop buying safer cars because safer cars are always chosen as “targets” for self-driving cars to crash into, making them statistically less safe.
I don’t think the concerns are large enough to worry about, but hypothetically it’s an interesting dilemma.
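For concreteness, the priority ordering in the first paragraph can be read as a lexicographic decision rule over candidate maneuvers. Here is a minimal sketch in Python, assuming the planner can attach expected-harm scores to each maneuver; all names and numbers are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    """A candidate action with its predicted harms (hypothetical fields)."""
    name: str
    expected_deaths: float
    expected_brain_injuries: float
    expected_body_injuries: float

def choose_maneuver(candidates: list[Maneuver]) -> Maneuver:
    # Lexicographic minimization: deaths dominate brain injuries,
    # which dominate bodily injuries, matching the ordering above.
    return min(
        candidates,
        key=lambda m: (
            m.expected_deaths,
            m.expected_brain_injuries,
            m.expected_body_injuries,
        ),
    )

# Example: braking hurts the occupant a little; swerving risks a
# pedestrian's life. The rule prefers braking.
options = [
    Maneuver("brake hard", 0.0, 0.01, 0.30),
    Maneuver("swerve left", 0.05, 0.0, 0.0),
]
print(choose_maneuver(options).name)  # -> brake hard
```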
When I was a dumb kid, my friends and I regularly jaywalked (jayran?) across three lanes of high-speed traffic at a time, just to get to a nicer place for lunch. Don’t underestimate the populations of stupid and selfish people in the world, or their propensity to change behavior in response to changing incentives.
On the other hand, I’m not sure how the incentives here will change. Any self-driving car is going to be speckled with cameras, and “I know it will slam on the brakes or swerve to avoid me” might not be much temptation when followed with “then it will send my picture to the police”.
Aaaaand now you’ve brought the privacy controversy into the mix.
In a completely reasonable way. If your driving strategy involves creating problems for other people, that’s intrinsically not a private activity.
Ah, an interesting possibility. Self-driving cars can be gamed. If I know a car will always swerve to avoid me, I can manipulate it.
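As a toy sketch of that exploitability, assuming a purely deterministic “always swerve away” rule (everything here is hypothetical): anyone who knows the rule can steer the car’s behavior just by stepping toward it.

```python
def swerve_policy(car_lane: int, ped_lane: int, num_lanes: int = 3) -> int:
    """Toy deterministic avoidance rule: always shift one lane away
    from the pedestrian, clamped to the road edge. (Hypothetical.)"""
    away = -1 if ped_lane >= car_lane else 1
    return max(0, min(num_lanes - 1, car_lane + away))

car = 1
for ped in (2, 1, 1):          # pedestrian keeps stepping toward the car
    car = swerve_policy(car, ped)
print(car)                      # -> 0: pinned against the road edge
```

Because the policy is fully predictable, it is fully manipulable; randomizing the response, or pairing avoidance with the camera-and-report behavior mentioned above, would break that pure predictability.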
I doubt that self-driving cars would have to choose between crashing into two vehicles often enough for these considerations to show up in the statistics.
I don’t know about that. “Conventional morality” is not a well-formed or coherent system, and there are a lot of situations where other factors would override minimizing loss of life.
What kinds of things override loss of life and can be widely agreed upon?
Going to war, for example.
Or consider involuntary organ harvesting.
In the self-driving car example, say “getting to your destination”. Keep in mind that the mere act of the car getting out on the road increases the expected number of resulting deaths.
I disagree. The driver of a car is much less in danger than a pedestrian.
No one pedestrian is more likely to die as a result of an accident involving a particular car than the owner of that car, though, which I think is what Cube meant.
True, but that doesn’t change the fact that if you’re at risk of crashing into a pedestrian, your car will act to save the pedestrian, rather than you.