I wonder if it’s better or worse to construct problems that are implausible from the very start, instead of being potentially realistic up to a certain point where you’re asked to suspend disbelief. (Similar to how we do decision problems here, with Omega being portrayed as a superintelligence from another galaxy who is nearly omniscient and whose sole goal appears to be giving people confusing decision problems. IIRC, conventional treatments of decision theory often portray the Predictor as a human and do not explain why his predictions tend to be accurate, only specifying that he has previously been right 99% or 100% of the time. I suspect that format tends to encourage people to make excuses not to answer the real question.) So, suppose instead of the traditional trolley problem, we say “An invincible demon appears before you with a hostage tied to a chair, and he gives you a gun. He tells you that you can shoot the hostage in the head or untie her and set her free, and that if and only if you set her free, he will go off and kill five other people at random. What do you do?” Does that make it better or worse, in terms of your ability to separate the implausibility of the situation from your ability to come to a moral judgment?
Your version adds an irrelevancy—the possible moral agency of the demon provides an out for the test subject: “It is not my fault those 5 people died; the demon did it.” It is much more difficult to shift moral responsibility to the trolley.
Good point, though that still tests whether a person thinks of “moral agency” as a relevant factor in deciding what to do.
I’m not sure why it’s perceived as more difficult. The trolley didn’t just appear magically on the tracks. Someone put it there and set it moving (or negligently allowed it to).
Well, I perceive it as more difficult because of my intuitions about how culpability travels up a causal chain.
For example, if someone dies because of a bullet fired into their heart from a gun held by a hand controlled by a brain B following an instruction given by agent A, my judgment of culpability travels unattenuated through the bullet and the gun and the hand. To what degree it grounds out in B and A depends on to what degree I consider B autonomous… if B is an advanced ballistics-targeting computer I might be willing to call it a brain but still unwilling to hold it culpable for the death, for example. Either way, the bulk of the culpability grounds out there. I may go further and look at the social structures and contingent history that led to A and B (and the hand, bullet, gun, heart, etc.) being the way they are, but that will at best be in addition to the initial judgment, and I probably won’t bother.
Similarly, if five people are hit by a trolley that rolled down a track that agent A chose not to stop, my intuitions of culpability ground out in A. Again, I may go further and look at the train switching systems and so on and so forth, but that will be in addition to the initial judgment, and I probably won’t bother.
I find it helpful to remember that intuitions about culpability are distinct from beliefs about responsibility.
Nitpick: why can’t I leave the hostage tied to the chair without shooting her?
I begin to suspect that it’s impossible to come up with a moral dilemma so implausibly simplified that nobody can possibly find a way to nitpick it. :P
(Though that one was just sloppiness on my part, I admit.)
Or untie her and then shoot her? ;)
Well, if you were a superintelligence from another galaxy who was nearly omniscient, what would YOU do with it?
Make paperclips, of course.
Fondly regard creation.