Just because I feel 99% certain of some information, it does not mean that I am right in 99% of situations. This should be included in the calculation.
Even if I were a perfect Bayesian reasoner, most people aren’t. Are we solving this one specific situation, or are we creating a general rule that all people will follow? Because it may be better to let five people die once than to create a precedent that allows all kinds of irrational folks to go around killing a random person every time they feel that by doing so they have prevented the hypothetical deaths of five others.
(If you want to go on and ask whether it is good to kill one person to prevent a 99% chance of five people dying, assuming that we are absolutely sure about all these data, and assuming that this sets no precedent or slippery slope for people in similar circumstances… then the answer is: yes. But in real life, the probability that such a situation actually occurs is much smaller than the probability that I have misunderstood the situation.)
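To make the arithmetic behind that parenthetical concrete, here is a minimal sketch. The 90% calibration figure is an assumption invented for illustration, not anything established above; it just stands in for "my track record when I feel 99% certain."

```python
# Toy expected-value comparison; all numbers are made up for illustration.
p_felt = 0.99   # how certain I feel that five people die unless I act
p_real = 0.90   # assumed actual accuracy of my "99% certain" judgments (calibration)

deaths_if_act = 1.0   # pulling the switch kills one person for certain

# Expected deaths if I do nothing, using felt vs. calibrated probability.
deaths_if_wait_felt = 5 * p_felt
deaths_if_wait_real = 5 * p_real

print(f"act:                         {deaths_if_act:.2f} expected deaths")
print(f"wait (felt certainty):       {deaths_if_wait_felt:.2f} expected deaths")
print(f"wait (calibrated certainty): {deaths_if_wait_real:.2f} expected deaths")
```

With these made-up numbers the expected-death count still favors acting, but the gap narrows as the calibration discount grows, which is the point of the parenthetical.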
Sure, but knowing that doesn’t necessarily help. If, in my travels, I find myself in what seems to be a situation where I am standing by a switch on a train track, while what I estimate to be a train approaches in such a way that I expect it will go down track A if left alone or track B if I pull the switch, and I observe what appear to be six people, one of whom is tied to track B and five of whom are tied to track A, it is of course possible that all of my observations and estimations are incorrect. But I’m still left with the question of what to do.
I mean, sure, if I pull the switch and it turns out that the five people who I thought were tied to track A are just lifelike mannequins, then I’ve just traded away a world in which nobody dies for a world in which someone dies, which isn’t a choice I endorse.
On the other hand, if I don’t pull the switch and it turns out that the person I thought was tied to track B is just a lifelike mannequin, then I’ve just traded away a world in which nobody dies for a world in which five people die, which isn’t a choice I endorse either.
Any choice I make might be wrong, and might result in unnecessary deaths. But that doesn’t justify any particular choice, including the choice to not intervene.
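Here is a small sketch of the symmetry being described, with hypothetical credences chosen purely for illustration: mistaken observations can make either choice the wrong one, so uncertainty by itself does not single out inaction.

```python
# Expected deaths under each choice, given my credence that the people I see are real.
# All probability values below are hypothetical.
def expected_deaths(pull: bool, p_five_real: float, p_one_real: float) -> float:
    """If I pull, the one on track B dies (if real); if I wait, the five on track A die (if real)."""
    return p_one_real if pull else 5 * p_five_real

scenarios = [(0.95, 0.95), (0.95, 0.05), (0.05, 0.95)]
for p_five_real, p_one_real in scenarios:
    pull = expected_deaths(True, p_five_real, p_one_real)
    wait = expected_deaths(False, p_five_real, p_one_real)
    print(f"P(five real)={p_five_real:.2f}, P(one real)={p_one_real:.2f}: "
          f"pull -> {pull:.2f} expected deaths, wait -> {wait:.2f} expected deaths")
```

Whichever scenario turns out to be actual, one of the two choices was a mistake; the mannequin possibility penalizes pulling and not pulling alike.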
The example you listed doesn’t come close to addressing the topic. It isn’t trolley problems in general—it’s a particular variant of trolley problems where the rest of the information in the problem is fighting the certainty you are told to assume about the options.