Okay, this is getting annoying. I’ve mostly ignored “near vs. far” topics because I don’t know what the metaphorical meaning of the two is. Then, when I went to the LW wiki to be enlightened so I can understand these topics, what do I get?
NEAR: All of these bring each other more to mind: here, now, me, us; trend-deviating likely real local events; concrete, context-dependent, unstructured, detailed, goal-irrelevant incidental features; feasible safe acts; secondary local concerns; socially close folks with unstable traits.
FAR: Conversely, all these bring each other more to mind: there, then, them; trend-following unlikely hypothetical global events; abstract, schematic, context-freer, core, coarse, goal-related features; desirable risk-taking acts, central global symbolic concerns, confident predictions, polarized evaluations, socially distant people with stable traits.
So … one of them goes with a bunch of terms that have some vague relationship to each other, and the other one … um, does the same with different terms. Was that supposed to somehow be helpful?
Anyway, I don’t know how to translate this into near and far, but here’s my answer to the trolley problem:
Workers on the track consented to the risks associated with being on a trolley track, such as errant trolleys. (This does NOT mean they deserved to die, of course.) Someone standing above the track on a bridge only consented to the risks associated with being on a bridge above a trolley track, NOT to the risk that someone would draft him for sacrificial lamb duty on a moment’s notice.
By intervening to push someone onto the track, you suddenly and unpredictably shift around the causal structure associated with danger in the world, on top of saving a few lives. Now, people have to worry about more heroes drafting sacrificial lambs “like that one guy did a few months ago” and have to go to greater lengths to keep their risk at the same level.
In other words, all the “prediction difficulty” costs associated with randomly changing the “rules of the game” apply. Just as it’s costly to make people keep updating their knowledge of what’s okay and what isn’t, it’s costly to make people update their knowledge of what’s risky and what isn’t (and to less efficient regimes, no less).
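To put a toy model on that (every number below is invented purely for illustration, not an estimate of anything real): the one-time gain from pushing is bounded, while the precaution cost recurs across everyone who updates their risk model.

```python
# Toy expected-cost sketch of the "changing the rules of the game" argument.
# All numbers are hypothetical, chosen only to show the shape of the tradeoff.

LIVES_SAVED = 4             # five on the track minus the one pushed
VALUE_PER_LIFE = 1.0        # normalize one life to 1 unit of value

POPULATION = 1_000_000      # people who hear about the new precedent
EXTRA_PRECAUTION = 1e-5     # per-person, per-year cost of added wariness
YEARS_SALIENT = 10          # how long the precedent shapes behavior

one_time_gain = LIVES_SAVED * VALUE_PER_LIFE                   # 4.0
ongoing_cost = POPULATION * EXTRA_PRECAUTION * YEARS_SALIENT   # 100.0

print(one_time_gain > ongoing_cost)  # False: the norm cost dominates here
```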
That is what differentiates pushing the fat guy off a bridge from diverting the trolley to another track. I don’t pretend that that is what most people are thinking when they encounter the problem, but the “unusualness” of pushing someone off a bridge is certainly affecting their intuition, and so concerns about stability probably play a role. And of course, you have to factor in the fact that most people are responding on the fly, while the creator of the dilemma had all the time in the world to trip up people’s intuitions.
This is not to say there aren’t real moral dilemmas with the intended tradeoff. It’s just that, like with the Prisoner’s Dilemma, you need a more convoluted scenario to get the payoff matrix to work out as intended, at which point the situation is a lot less intuitive.
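For comparison, here is the payoff structure a genuine Prisoner’s Dilemma has to satisfy (standard textbook values; only the ordering matters). A trolley variant that is really “about” the intended tradeoff needs its payoffs pinned down just as carefully.

```python
# Canonical Prisoner's Dilemma payoffs. The dilemma only exists when
# T > R > P > S; likewise, a trolley scenario only tests the intended
# tradeoff if its "payoffs" are arranged so no side consideration
# (assumption of risk, norm stability, etc.) dominates.

T, R, P, S = 5, 3, 1, 0   # Temptation, Reward, Punishment, Sucker's payoff

payoffs = {               # (row player, column player)
    ("cooperate", "cooperate"): (R, R),
    ("cooperate", "defect"):    (S, T),
    ("defect",    "cooperate"): (T, S),
    ("defect",    "defect"):    (P, P),
}

assert T > R > P > S      # break this ordering and it's not a PD anymore
```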
Workers on the track consented to the risks associated with being on a trolley track, such as errant trolleys. (This does NOT mean they deserved to die, of course.) Someone standing above the track on a bridge only consented to the risks associated with being on a bridge above a trolley track, NOT to the risk that someone would draft him for sacrificial lamb duty on a moment’s notice.
That’s missing the point of the dilemma. You can assume that they’re not workers and that they didn’t consent to any risks. This problem isn’t about assumption of risk, it’s about how people perceive their actions as directly causing death, or not.
That’s missing the point of the dilemma. You can assume that they’re not workers and that they didn’t consent to any risks.
Like JGW said: workers or not, they assumed the risks inherent in being on top of a trolley track. The dude on the bridge didn’t. By choosing to be on top of a track, you are choosing to take the risks. It doesn’t mean (as you seem to be reading it) that you consent to dying. It means you chose a scenario with risks like errant trolleys.
This problem isn’t about assumption of risk, it’s about how people perceive their actions as directly causing death, or not.
Why do people talk like this? It’s a bright red flag to me that, to put it politely, the discussion won’t be productive.
Attention everyone: you don’t get to decide what a problem is “about”. You have to live with whatever logical implications follow from the problem as stated. If you want the problem to be “about” topic X, then you need to construct it so that the crucial point of dispute hinges on topic X. If you can’t come up with such a scenario, you should probably reconsider the point you were trying to make about topic X.
You can certainly argue that people make their judgments about the scenario because of a golly-how-stupid cognitive bias, but you sure as heck don’t get to say, “this problem is ‘about’ how people perceive their actions’ causation, all other arguments are automatically invalid”.
I presented a reason why intuitions treat the scenarios differently, and why the intuitions are correct in doing so. That reason is consistent with the problem as stated. Assumption of risk most certainly is a factor, and a justifiable one.
What if the problem were reframed such that nobody ever found out about the decision, so that everyone’s estimates of risk remained unchanged?
I presented a reason why intuitions treat the scenarios differently, and why the intuitions are correct in doing so. That reason is consistent with the problem as stated. Assumption of risk most certainly is a factor, and a justifiable one.
It is certainly possible that there is some underlying utilitarian rationale being used. Reframing the problem like I suggest above might provide something of a test of the reason you provided, if imperfect (can we really ignore intuitions on command?).
What if the problem were reframed such that nobody ever found out about the decision, so that everyone’s estimates of risk remained unchanged?
Then it’s wildly and substantively different from moral decisions people actually make, and are wired to be prepared for making. A world in which you can divert information flows like that differs in many ways that are hard to immediately appreciate.
It is certainly possible that there is some underlying utilitarian rationale being used.
The reasoning I gave wasn’t necessarily utilitarian—it also invokes deontological “you should adhere to existing social norms about pushing people off trolleys”. My point was that it still makes utilitarian sense.
Attention everyone: you don’t get to decide what a problem is “about”. You have to live with whatever logical implications follow from the problem as stated. If you want the problem to be “about” topic X, then you need to construct it so that the crucial point of dispute hinges on topic X. If you can’t come up with such a scenario, you should probably reconsider the point you were trying to make about topic X.
No. If you know what point someone was trying to make, and you know how to change the scenario so your reason why it doesn’t count no longer applies, then you should assume the Least Convenient Possible World for all the reasons given in that post.
True, and people should certainly try that, but sometimes the proponent of the dilemma is so confused that switching to the LCPW is ill-defined or intractable, since it’s extremely difficult to remove one part while preserving “the sense of” the dilemma.
That’s what I think was going on here.
Fair enough. You just stated it a little more strongly than is defensible.
I think you missed this part:
This is not to say there aren’t real moral dilemmas with the intended tradeoff. It’s just that, like with the Prisoner’s Dilemma, you need a more convoluted scenario to get the payoff matrix to work out as intended, at which point the situation is a lot less intuitive.
Silas is saying that the Least Convenient World to illustrate this point requires lots of caveats, and is not as simple as the scenario presented.
You can assume that they’re not workers and that they didn’t consent to any risks.
This is still not inconvenient enough. They are still responsible for being on the track, whether by ignorance or acceptance of the risks.
I usually assume that they were kidnapped by crazed philosophers and tied to the tracks specifically for the purpose of the demonstration.
Okay, but that would be a fundamentally different problem, with different moral intuitions applying. The question becomes, “should five kidnapped people die, or one fat kidnapped person die?”
Silas, you’re right: the problem was poorly stated in the lecture referenced by the original post. The trolley car problem is in fact usually written to make it clear that the five people did not assume any risk. The original intent of this kind of problem was to explore intuitions dealing with utilitarianism.
NB: “The trolley problem” does not uniquely describe a problem. While it does refer to Foot’s version from 1978, it also refers to any of the class of “trolley problems”, hundreds of which have appeared in published papers since then.
Much like “Gettier case” does not uniquely identify one thought experiment.
Okay, that’s actually the first time I’d seen the Trolley problem involve a “mad philosopher” (or equivalent concept) having tied them to the track, and that includes my previous visits to the Wikipedia article!
And even the later expositions in the article involving a fat man don’t mention people being kidnapped.
Well, I didn’t edit the article! I think you’re right about the assumption of risk version.
I do prefer the “mad philosopher” versions, because they make the apparently contradictory preferences very clear. That way, you’re weighing 5x against x. Most people have an intuition that it would be wrong to push the fat man, yet right to change the course of the trolley, which seems strange.
I was curious, so I went to look. The ‘mad philosopher’ phrase was added in April, by an unnamed contributor. [Link]
I would still think that it would be bad for people to have to worry about being drafted as sacrificial lambs because other people could not avoid being kidnapped by crazed philosophers.
One of the implications of the crazed-philosopher setup, though, is that there are well-enforced laws against tying people to railroad tracks, so that should be a rare occurrence, not something that people should have to take into consideration in their day to day lives. (So should ‘workers on a section of track that’s not protected from trains’, actually—OSHA would have something to say about it, I’m sure. I still prefer the crazed philosophers, though. They’re funny.) You do have a point, but that’s an issue that we as a society have already resolved in many cases.
I have a different interpretation of the LCPW here, though. The LCPW is supposed to be the one that isolates the moral quantity of interest—in this case, the decision to push or not, or to switch tracks—and is specifically designed to exclude answers that consider factors (realistic or not) that sidestep the issue.
I’d say the LCPW is one in which nobody will ever hear about the decision, and thus in which any ancillary effects are neutralized.
I don’t pretend that that is what most people are thinking when they encounter the problem, but the “unusualness” of pushing someone off a bridge is certainly affecting their intuition, and so concerns about stability probably play a role.
I don’t know; a lot of people talk about how he’s “not involved” or “innocent”, or how you shouldn’t involve people who aren’t already part of the problem—it’s the same as the case of the guy with healthy organs and the dying transplant patients.