If you don’t know anything about them, there is some chance that deciding to pull the switch will change people’s incentives, making them feel freer to step in front of trolleys.
Also, consider precommitting. You precommit to pull or not pull the switch based on whether pulling the switch saves more people overall, including the change in people’s actions caused by the existence of your precommitment. (You could even model some deontological rules as a form of precommitting.) Whether it is good to precommit inherently depends on the long-term implications of your action, unless you want to have separate precommitments for quantum-fluctuation trolleys and normal trolleys that people choose to walk in front of.
And of course your precommitment may end up making people worse off in this particular situation (more people die if you don’t switch the trolley), but that’s how precommitments work: having to follow through on the precommitment can leave things worse off in one instance without making the precommitment a bad idea.
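For concreteness, here is a minimal sketch of that expected-value comparison in Python. Every number in it is made up for illustration: the casualty counts, the number of future cases, and the `incentive_effect` parameter are hypothetical assumptions, not anything from the thread. The point is just that once the incentive effect on future behaviour is large enough, which policy the precommitment should pick flips.

```python
# Toy model of the precommitment argument above. All parameters are
# hypothetical; they only illustrate how the incentive effect on future
# behaviour can flip which standing policy saves more people overall.

def expected_deaths(pull_policy: bool,
                    deaths_if_pull: int = 1,
                    deaths_if_not: int = 5,
                    future_trolley_cases: int = 100,
                    incentive_effect: float = 0.02) -> float:
    """Expected deaths under a standing policy of pulling (or not).

    incentive_effect: assumed extra probability, per future case, that
    someone steps in front of a trolley because they know switch-pullers
    will save them (applies only if the policy is to pull).
    """
    immediate = deaths_if_pull if pull_policy else deaths_if_not
    # If people know you'll pull, some extra people take the risk; each
    # extra incident still costs deaths_if_pull lives on average.
    induced = (future_trolley_cases * incentive_effect * deaths_if_pull
               if pull_policy else 0.0)
    return immediate + induced

for effect in (0.0, 0.02, 0.10):
    pull = expected_deaths(True, incentive_effect=effect)
    hold = expected_deaths(False, incentive_effect=effect)
    better = "pull" if pull < hold else "don't pull"
    print(f"incentive_effect={effect:.2f}: pull={pull:.1f}, "
          f"don't pull={hold:.1f} -> precommit to {better}")
```

With these made-up numbers, pulling wins at incentive effects of 0.00 and 0.02, but at 0.10 the induced future incidents outweigh the four lives saved now, so the precommitment flips to not pulling, even though not pulling looks worse in the single case at hand.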
Don’t know if this is “least convenient world” or “most convenient world” territory, but I think it fits the spirit of the problem:
No one will know that a switch was pulled except you.
That doesn’t work unless you can make separate precommitments for switches that nobody knows about and switches that people might know about. You probably can’t do that, for the same reason that you can’t have separate precommitments for quantum-fluctuation trolleys and normal trolleys.
Also, that assumption is not enough. As in the reasoning behind superrationality, people can figure out what your reasoning is whether or not you tell them. You’d have to assume that nobody knows what your ethical system is, plus one wide-scale assumption, such as that nobody knows about the existence of utilitarians (or of deontologists whose rules are modelled as utilitarian precommitments).