My trouble with the trolley problem is that it is generally stated with a lack of sufficient context to understand the long-term implications. We’re saying there are five people on this track and one person on this other track, with no explanation of why? Unless the answer really is “quantum fluctuations”, utilitarianism demands considering the long-term implications of that explanation. My utility function isn’t “save as many lives as possible during the next five minutes”, it’s (still oversimplifying) “save as many lives as possible”, and figuring out what causes five people to step in front of a moving trolley is critical to that! There will surely still be trolleys running tomorrow, and next month, and next year.
For example, if the reason five people feel free to step in front of a moving trolley is “because quasi-utilitarian suckers won’t let the trolley hit us anyway”, then we’ve got a Newcomb problem buried here too. In that case, the reason to keep the trolley on its scheduled track isn’t because that involves fewer flicks of a switch, it’s because “maintenance guy working on an unused track” is not a situation we want to discourage but “crowd of trespassers pressuring us into considering killing him” is.
“My trouble with the trolley problem is that it is generally stated with a lack of sufficient context to understand the long-term implications.”
While limited knowledge is inconvenient, that’s reality. We have limited knowledge. You place your bets and take your chances.
In the least convenient possible world, you happen upon these people and don’t know anything about them, their past, or their reasons.
Even if you don’t know anything about them, there is still some chance that pulling the switch will change the incentives for people to feel free to step in front of trolleys in the future.
Also, consider precommitting. You precommit to pull or not pull the switch based on whether pulling the switch saves more people overall, including the change in people’s actions caused by the existence of your precommitment. (You could even model some deontological rules as a form of precommitment.) Whether it is good to precommit inherently depends on the long-term implications of your action, unless you want to have separate precommitments for quantum-fluctuation trolleys and for normal trolleys that people choose to walk in front of.
And of course it may turn out that your precommitment makes people worse off in this particular situation (more people die because you don’t switch the trolley), but that’s how precommitments work: having to follow through in a specific case can leave things worse off without making the precommitment itself a bad idea.
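To make that trade-off concrete, here is a minimal sketch of the arithmetic behind such a precommitment. The incident rates, deaths per incident, and ten-year horizon are all invented for illustration; none of these figures come from the original problem.

```python
# Minimal sketch of the precommitment arithmetic. All numbers (incident
# rates, deaths per incident, the ten-year horizon) are invented for
# illustration; they are not part of the original trolley problem.

def expected_deaths(incidents_per_year, deaths_per_incident, years=10):
    """Expected deaths over the horizon under a given policy."""
    return incidents_per_year * deaths_per_incident * years

# Policy A: known to always switch. Each incident kills only the one
# person on the side track, but (by assumption) people step onto the
# main track more freely, so incidents are far more common.
always_switch = expected_deaths(incidents_per_year=12, deaths_per_incident=1)

# Policy B: precommitted never to switch. Each incident kills five, but
# (by assumption) the incentive not to trespass keeps incidents rare.
never_switch = expected_deaths(incidents_per_year=1, deaths_per_incident=5)

print(always_switch)  # 120 expected deaths over ten years
print(never_switch)   # 50 expected deaths over ten years
# The precommitment looks bad inside any single incident (5 deaths vs. 1)
# yet, under these assumed numbers, saves more lives overall.
```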
Don’t know if this is “least convenient world” or “most convenient world” territory, but I think it fits in the spirit of the problem:
No one will know that a switch was pulled except you.
That doesn’t work unless you can make separate precommitments for switches that nobody knows about and switches that people might know about. You probably can’t do that, for the same reason that you can’t have separate precommitments for quantum fluctuations and for normal trolleys.
Also, that assumption is not enough. Similarly to the reasoning behind superrationality, people can figure out what your reasoning is whether or not you tell them. You’d have to assume that nobody knows what your ethical system is, plus some wider-scale assumption, such as that nobody knows about the existence of utilitarians (or of deontologists whose rules are modelled as utilitarian precommitments).
The purpose of the trolley problem is to consider the clash between deontological principles and allowing harm to occur. So the best situation to consider is one which sets up the purest clash possible. Of course, you can always consider multiple variants of the trolley problem if you then want to explore other aspects.