Sure, but knowing that doesn’t necessarily help. If I, in my travels, find myself in what seems to me to be a situation where I am standing by a switch on a train track, while what I estimate to be a train approaches in such a way that I expect it will go down track A if left alone or track B if I pull the switch, and I observe what appear to be six people, one of whom is tied to track B and five of whom are tied to track A, it is of course possible that all of my observations and estimations are incorrect. But I’m still left with the question of what to do.
I mean, sure, if I pull the switch and it turns out that the five people who I thought were tied to track A are just lifelike mannequins, then I’ve just traded away a world in which nobody dies for a world in which someone dies, which isn’t a choice I endorse.
On the other hand, if I don’t pull the switch and it turns out that the person I thought was tied to track B is just a lifelike mannequin, then I’ve just traded away a world in which nobody dies for a world in which five people die, which isn’t a choice I endorse either.
Any choice I make might be wrong, and might result in unnecessary deaths. But that doesn’t justify any particular choice, including the choice to not intervene.
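To put a number on why that is, here’s a minimal sketch, entirely my own framing rather than anything the problem stipulates: assign a probability to each observation being wrong (folding the mannequin cases above in as observation error) and compare expected deaths for the two actions. The names and probabilities are made up for illustration.

```python
def expected_deaths(pull: bool, q_five_fake: float, q_one_fake: float) -> float:
    """Expected deaths under doubt about each observation.

    q_five_fake: probability the five on track A are actually mannequins.
    q_one_fake:  probability the one on track B is actually a mannequin.
    (Both parameters are mine, purely for illustration.)
    """
    if pull:  # train diverted to track B; the one dies only if real
        return (1 - q_one_fake) * 1
    # train stays on track A; the five die only if real
    return (1 - q_five_fake) * 5

# Symmetric doubt about both observations never favors leaving the switch:
for q in (0.0, 0.5, 0.9):
    print(f"q={q}: pull -> {expected_deaths(True, q, q):.2f} expected deaths, "
          f"don't pull -> {expected_deaths(False, q, q):.2f}")
```

Under symmetric doubt, pulling comes out ahead at every q; uncertainty alone shrinks the stakes but doesn’t single out non-intervention as the safe default.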
The example you listed doesn’t come close to addressing the topic. It isn’t about trolley problems in general; it’s about a particular variant of trolley problems in which the rest of the information in the problem is fighting the certainty you are told to assume about the options.