Somehow, I doubt I could achieve any more than 1% confidence that, say, the best plan to save 5 children in a burning building was to stab the fireman with a piece of glass, knock his ladder over, and pull him off it so his body fell nearly straight down and could serve as a cushion for the 5 children, who would each get off the body soon enough to let the next one follow. Actually, I was not only to assume that this is the best plan, but that it is certain to work, and that if I don't carry it out, the 5 children will certainly die, but the fireman and I will not.
Or, alternately, that I’m on a motorboat, and there’s a shark in the water who will eat 5 people unless I hit the accelerator soon enough and hard enough that one of my existing passengers will certainly be knocked off the back, and will certainly be eaten by the shark (perhaps it was a second shark, which would raise the likelihood that I couldn’t do some fancy boatwork to get ’em all). I do not have time to tell anyone to hold on—I absolutely MUST goose the gas to get there that extra half second early that somehow makes the difference between all five being eaten and none of the five being eaten.
So your issue isn’t actually with (moral) reasoning under uncertainty or the trolley problem in general; it’s just with highly specific, really bad examples. Gotcha.
I think in general, if your plan is complicated, involves imposing a large up-front cost on someone else, and you have very high confidence in it, the moral thing to do is to audit your certainty.
Just because I feel 99% certain of some information does not mean that I am actually right in 99% of such situations. That calibration gap should be included in the calculation.
Even if I were a perfect Bayesian reasoner, most people aren’t. Are we solving this one specific situation, or are we creating a general rule that all people will follow? Because it may be better to let 5 people die once than to set a precedent that allows all kinds of irrational folks to go around killing a random person whenever they feel that doing so has prevented the hypothetical deaths of five other people.
(If you want to go on and ask whether it is good to kill one person to prevent a 99% chance of five people dying, assuming that we are absolutely sure about all of these data, and assuming that this sets no kind of precedent or slippery slope for people in similar circumstances… then the answer is: yes. But in real life, the probability that such a situation actually arises is much smaller than the probability that I have misunderstood it.)
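To put rough numbers on that, here is a minimal sketch in Python (my own toy model; the calibration figures are invented, not from the thread). Acting wins on expected deaths only when the chance that my read of the situation is correct exceeds 1/5, and a felt 99% can correspond to a much lower calibrated figure:

```python
# Toy model of the 1-vs-5 tradeoff under uncertainty about my own read
# of the situation. All probabilities below are illustrative assumptions.

def expected_deaths(act: bool, p_right: float) -> float:
    """Expected deaths, given whether I act and the probability that my
    understanding of the situation is correct.

    If I act, the one person dies whether or not I was right.
    If I don't act, the five die only if my read of the situation is right.
    """
    return 1.0 if act else 5.0 * p_right

# Acting helps only when 5 * p_right > 1, i.e. p_right > 0.2.
# "felt" is how certain I feel; "calibrated" is a made-up estimate of how
# often claims made at that felt certainty actually turn out to be true.
for label, p_right in [("felt 99%", 0.99), ("calibrated 90%", 0.90),
                       ("convoluted plan, 1%", 0.01)]:
    act, wait = expected_deaths(True, p_right), expected_deaths(False, p_right)
    verdict = "act" if act < wait else "do nothing"
    print(f"{label}: act -> {act:.2f} deaths, do nothing -> {wait:.2f} -> {verdict}")
```

Note that on this toy model the gap between a felt 99% and a calibrated 90% doesn’t flip the verdict; what flips it is the kind of collapse in calibrated confidence that convoluted plans like the burning-building one produce.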
Sure, but knowing that doesn’t necessarily help. If, in my travels, I find myself standing by what seems to be a switch on a train track, while what I estimate to be a train approaches in such a way that I expect it will go down track A if left alone or track B if I pull the switch, and I observe what appear to be six people, one of whom is tied to track B and five of whom are tied to track A, it is of course possible that all of my observations and estimations are incorrect. But I’m still left with the question of what to do.
I mean, sure, if I pull the switch and it turns out that the five people who I thought were tied to track A are just lifelike mannequins, then I’ve just traded away a world in which nobody dies for a world in which someone dies, which isn’t a choice I endorse.
On the other hand, if I don’t pull the switch and it turns out that the person I thought was tied to track B is just a lifelike mannequin, then I’ve just traded away a world in which nobody dies for a world in which five people die, which isn’t a choice I endorse either.
Any choice I make might be wrong, and might result in unnecessary deaths. But that doesn’t justify any particular choice, including the choice to not intervene.
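For what it’s worth, the mannequin worry can be cast in the same expected-value terms. A minimal sketch (Python again, with invented probabilities): if the five on track A and the one on track B might each, independently, turn out to be mannequins, neither possible error privileges inaction.

```python
# Toy model: each observation could independently be wrong.
# p_five = probability the five on track A are real people;
# p_one  = probability the person on track B is real.
# All probabilities below are invented for illustration.

def expected_deaths(pull: bool, p_five: float, p_one: float) -> float:
    """Pulling sends the train down track B; doing nothing leaves it on A."""
    return 1.0 * p_one if pull else 5.0 * p_five

# Pulling wins whenever p_one < 5 * p_five, so even moderately shaky
# evidence about the five can outweigh strong evidence about the one.
for p_five, p_one in [(0.95, 0.95), (0.50, 0.95), (0.10, 0.95)]:
    pull = expected_deaths(True, p_five, p_one)
    stay = expected_deaths(False, p_five, p_one)
    choice = "pull" if pull < stay else "don't pull"
    print(f"p_five={p_five:.2f}, p_one={p_one:.2f}: "
          f"pull={pull:.2f}, stay={stay:.2f} -> {choice}")
```

The sketch is only meant to show the structure of the dilemma: both branches carry error risk, so “I might be wrong” by itself is not an argument for leaving the switch alone.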
The example you listed doesn’t come close to addressing the topic. It isn’t about trolley problems in general; it’s a particular variant of the trolley problem in which the rest of the information in the problem fights the certainty you are told to assume about the options.
Okay… so, you are 99% certain of your information. Does that change your answer?