That’s the case where both groups are on a track, not the case where I could push a safely-positioned non-tracker onto the track. And in that case I don’t generally object to changing which track the trolley is on anyway.
In any case, this aspect would again fundamentally change the problem, while still not changing the logic I gave above:
Nobody will know what you did, or even that you did anything, except you.
This (if applied to the fat man case I actually object to) is basically saying that I can rewrite physics to the point where even being on a bridge above a train does not protect you from being hit by it. Thus everything I said before, about it becoming harder to assess risk and trade it off, would still apply, and making the change would be inefficient for the same reasons. (I.e., I would prefer a world in which risks are easier to assess, not one in which you have to be miles from any dangerous thing just to be safe.)
In the two-track setup, only the people on one of the tracks are going to get killed, even if you do nothing. Switching the train to a previously-safe track with someone on it is morally identical to throwing a safely-positioned person onto a single track, IMO.
That’s an interesting opinion to hold. Would you care to go over the reasons I’ve given for finding them different?
For clarity: from this post, I understood your objection to be primarily rooted in second-order effects. Your claim seems to be that you are not simply saving these people and killing those people by your actions; you are also destroying understanding of how the world works, wrecking incentive structures, and so on. If my understanding on this point is incorrect, please clarify.
Assuming the above is correct, my modification seems to deal with those objections cleanly. If you are the only one who knows what happened, then people aren’t going to get the information that some crazy bastard threw some dude at a trolley; they’re just going to go on assuming that sort of thing only happens in debates between philosophy geeks. It is never known to have happened, so the second-order effects from people’s reactions to it never come up, and you can look at the problem purely in terms of first-order effects.
Replacing “like that guy did a few months ago” in my comment with something agentless and Silas-free such as “like seems to happen these days” doesn’t, AFAICT, change the relevance of my objection: people are still less able to manage risk, and a Pareto disimprovement has happened in that people have to spend more to get the same-utility risk/reward combo. So your change does not obviate my distinction and objection.
But it has to be a real, known problem in order for people’s actions to change. Given that a pure trolley problem hasn’t yet happened in reality, keeping it secret if one did happen should be more than sufficient to prevent societal harm from the reactions.
But if I say that it’s a good idea here, I’m saying it’s a good idea in any comparable case, and so it should be a discernible (and Pareto-inefficient) phenomenon.
But if you limit “comparable cases” to situations where you can do it in secret, that’s not a problem.
Again, the problem is not that people could notice me as being responsible. The problem is that it’s harder to assess dangers at all, so people have to increase their margins of safety all around. If someone wants to avoid death by errant trolleys, it’s no longer enough to be on a bridge overpass; they have to be way, way removed.
The question, in other words, is: “Would I prefer that causality were less constrained by locality?” No, I would not, regardless of whether I get the blame for it.
So your claim is that other people’s reasoning processes work not based on evidence derived by their senses, but instead by magic. An event which they have no possible way of knowing about has happened, and you still expect them to take it into account and change their decisions accordingly. Do I have that about right?
If this kind of thing consistently happened (as it would have to, if I claim it should be done in every comparable case), then yes it would be discernible, without magic.
If this action is really, truly intended as a “one-off” action, then sure, you avoid that consequence, but you also avoid talking about morality altogether, since you’ve relaxed the very constraint that moral rules be consistent.
So morality is irrelevant in sufficiently unlikely situations?
No, your criticism of a particular morality is irrelevant if you stipulate that the principle behind its solution doesn’t generalize. That is, if you say, “what would you do here if we stipulated that the reasoning behind your decision didn’t generalize?” then you’ve discarded the requirement of consistency and the debate is pointless.
I think of it more as establishing boundary conditions. Obviously, you can’t use the trolley problem on its own as sufficient justification for Lenin’s policy of breaking a few eggs. But if the pure version of the problem leads you to the conclusion that the action is wrong even to think about, then you avoid the discussion entirely; whereas if it’s a proper approach in the pure problem, the next step is trying to figure out the real-world limits.
In this situation, you’re trying to claim that your favored solution to the pure version of the problem (taking the action) requires such narrow conditions that I can safely assume it won’t imply any recognizable regularity to which people could adapt. My point is that, in that case:
1) You’re no longer talking about trolley-like problems at all (as in my earlier distinction between the “which side of road” problem and the “which side of road + bizarre terrorist” problem), and
2) Since there is no recognizable regularity to the solution, the situation does not even serve to illuminate a boundary.
I’m trying to say that the problem exists mostly to fix a boundary. If killing one to save five is not okay, even under the most benign possible circumstances, then that closes off large fields of possible argument. If it is okay, then it bars people from using especially absolutist arguments in situations like murder law.
(The other advantage is that it gets people thinking about exactly what they believe, which is generally a good thing.)
Also, re your side-of-road problem, I could actually come up with an answer for you in a minimal setup: assuming a new island that’s building a road network, I’d probably go for driving on the right, because more cars are manufactured for it (left-hand drive), and because most people are right-handed and the centre console has more, and touchier, controls on it than the door.