I don’t want humans to make decisions where they kill one person to save another. The trolley problem feels bad to us because, in real situations, it’s never that clear. Omega is never leaning over your shoulder, explaining that killing the fat man really will save those people; you just have to guess, and human guesses can be wrong. What I suspect humans are doing is a hidden probability calculation that says “well, there’s probably a chance of x that I’ll save those people, which isn’t high enough to chance it”. There’s an argument to be had that even if there’s a 10% chance killing one person could save 11, we should still not kill the original person. This is because utility maximisation over probabilistic calculations only makes sense if I’m making such calculations repeatedly. I only come out ahead if I’ve made this gamble quite a few times, around ten. In all likelihood, what’s actually going to happen is that I’ll have ended up murdering someone.
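To make that intuition concrete, here is a rough toy simulation using the hypothetical numbers above (a certain death of one person against a 10% chance of saving eleven). It estimates how often you end up with a net loss of life despite the gamble having positive expected value; a sketch of the argument, not anyone’s actual decision procedure.

```python
import random

# Hypothetical numbers from the comment above: each pull of the lever
# certainly kills 1 person and saves 11 with probability 0.1, so the
# expected value is +0.1 lives per pull.
P_SAVE, SAVED, KILLED = 0.10, 11, 1

def net_lives(trials: int) -> int:
    """Net lives saved after `trials` repetitions of the gamble."""
    total = 0
    for _ in range(trials):
        total -= KILLED
        if random.random() < P_SAVE:
            total += SAVED
    return total

# Estimate how often you come out behind despite the positive expectation.
runs = 20_000
for n in (1, 10, 100):
    p_loss = sum(net_lives(n) < 0 for _ in range(runs)) / runs
    print(f"{n:>3} pulls: P(net lives lost) ~ {p_loss:.2f}")

# A single pull leaves a net death about 90% of the time, and even ten
# pulls still leave roughly a 1-in-3 chance (0.9**10) that nobody was
# ever saved.
```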
I’d be extremely worried about a machine that was willing to kill people to save others, because its calculations would have to be correct: a mistake could be horrifying for us all. The advantage of human calculation is that we are risk averse, and being risk averse is usually a good thing.
I strongly agree with this. Humans should be morally discouraged from making life-or-death decisions for other humans because of human fallibility. Individuals not only do not know enough in general to make snap decisions correctly about these kinds of probabilities, but also do not share enough values to make these decisions. The rules need to say you can’t volunteer other people to die for your cause.
Humans make life-and-death decisions for other humans every day. The President decides whether to bomb Libya or intervene in Darfur to prevent a genocide. The FDA decides whether to approve or ban a drug. The EPA decides how to weigh deaths from carcinogens produced by industry against jobs. The DOT decides how to weigh travel time against travel deaths.
Note that those are all decisions which have been off-loaded to large institutions.
People rarely make overt life and death decisions in their private lives.
Overt is the key word.
When you buy a car that’s cheaper than a Volvo, or drive over the speed limit, or build a house that cannot withstand a magnitude 9 earthquake, you are making a life and death decision.
No. The phrase “life or death decision” does not mean this, and this is not how it’s used.
Yes, and these are all examples of decisions that almost everyone is discouraged from making themselves. Other examples include a police officer’s decision to use lethal force, or whether a firefighter goes back into the collapsing building one more time. These are people specifically trained and encouraged to be better at making these judgments, and even then we still prefer the police officer to take the non-lethal path whenever possible. The average person is, and I think in general should be, discouraged from making life-or-death decisions for other people.