That’s missing the point of the dilemma. You can assume that they’re not workers and that they didn’t consent to any risks.
Like JGW said: workers or not, they assumed the risks inherent in being on top of a trolley track. The dude on the bridge didn’t. By choosing to be on top of a track, you are choosing to take the risks. It doesn’t mean (as you seem to be reading it) that you consent to dying. It means you chose a scenario with risks like errant trolleys.
This problem isn’t about assumption of risk; it’s about whether people perceive their actions as directly causing death or not.
Why do people talk like this? It’s a bright red flag to me that, to put it politely, the discussion won’t be productive.
Attention everyone: you don’t get to decide what a problem is “about”. You have to live with whatever logical implications follow from the problem as stated. If you want the problem to be “about” topic X, then you need to construct it so that the crucial point of dispute hinges on topic X. If you can’t come up with such a scenario, you should probably reconsider the point you were trying to make about topic X.
You can certainly argue that people make their judgments about the scenario because of a golly-how-stupid cognitive bias, but you sure as heck don’t get to say, “this problem is ‘about’ how people perceive their actions’ causation; all other arguments are automatically invalid”.
I presented a reason why intuitions treat the scenarios differently, and why the intuitions are correct in doing so. That reason is consistent with the problem as stated. Assumption of risk most certainly is a factor, and a justifiable one.
What if the problem were reframed such that nobody ever found out about the decision, and so their estimates of risk remained unchanged?
I presented a reason why intuitions treat the scenarios differently, and why the intuitions are correct in doing so. That reason is consistent with the problem as stated. Assumption of risk most certainly is a factor, and a justifiable one.
It is certainly possible that there is some underlying utilitarian rationale being used. Reframing the problem as I suggest above might provide something of a test of the reason you provided, if an imperfect one (can we really ignore intuitions on command?).
What if the problem were reframed such that nobody ever found out about the decision, and so their estimates of risk remained unchanged?
Then it’s wildly and substantively different from the moral decisions people actually make, and are wired to be prepared to make. A world in which you can divert information flows like that differs in many ways that are hard to immediately appreciate.
It is certainly possible that there is some underlying utilitarian rationale being used.
The reasoning I gave wasn’t necessarily utilitarian; it also invokes the deontological “you should adhere to existing social norms about pushing people off bridges”. My point was that it still makes utilitarian sense.
Attention everyone: you don’t get to decide what a problem is “about”. You have to live with whatever logical implications follow from the problem as stated. If you want the problem to be “about” topic X, then you need to construct it so that the crucial point of dispute hinges on topic X. If you can’t come up with such a scenario, you should probably reconsider the point you were trying to make about topic X.
No. If you know what point someone was trying to make, and you know how to change the scenario so that your reason why it doesn’t count no longer applies, then you should assume the Least Convenient Possible World, for all the reasons given in that post.
True, and people should certainly try that, but sometimes the proponent of the dilemma is so confused that switching to the LCPW is ill-defined or intractable, since it’s extremely difficult to remove one part while preserving “the sense of” the dilemma.
That’s what I think was going on here.
Fair enough. You just stated it a little more strongly than is defensible.