I think this is wrong: saying you’d yell real loud or call the police or break the game somehow is exactly the right response. It shows that someone is engaging with the problem as a serious moral one.
It is not clear to me that that is a more “right” response than engaging with the problem as a pedagogic tool in a way that aligns with the expectations of the person who set it to me. Indeed, I’m inclined to doubt it.
In much the same way: if I’m asked to multiply 367 by 1472, the response I would give in the real world is to launch a calculator application. But when asked to do this by the woman giving me a neuropsych exam after my stroke, I didn’t do that, because I understood that the goal was not to find out the product of 367 and 1472 but rather to find out something about my brain that would be revealed by my attempt to calculate that product.
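(As an aside, the product itself never appears in the exchange; the calculator route the anecdote mentions amounts to nothing more than this one line.)

```python
# The multiplication from the neuropsych anecdote: 367 times 1472.
# A calculator application, or one line of Python, yields the product directly.
product = 367 * 1472
print(product)  # 540224
```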
I agree with you that it’s no accident that people react like this to trolley problems, but I disagree with your analysis of the causes.
You called the trolley problem a pedagogic tool: what do you have in mind here specifically? What sort of work do you take the trolley problem to be doing?
It clarifies the contrast between evaluating the rightness of an act in terms of the relative desirability of the likely states of the world after that act is performed or not performed, vs. evaluating the rightness of an act in other terms.
Okay, that sounds reasonable to me. But what do we mean by ‘act’ in this case? We could, for instance, imagine a trolley problem in which no one had the power to change the course of the train, and it just went down one track or the other on the basis of chance. We could still evaluate one outcome as better than the other (presumably the one in which one man dies instead of five), but there’s no action.
Are we making a moral judgement in that case? Or do we reason differently when an agent is involved?
I don’t know who “we” are. What I say about your proposed scenario is that the hypothetical world in which five people die is worse than the hypothetical world in which one person dies, all else being equal. So, no, my reasoning doesn’t change because there’s an agent involved.
But someone who evaluates the standard trolley problem differently might come to different conclusions.
For example, I know any number of deontologists who argue that the correct answer in the standard trolley problem is to let the five people die, because killing someone is worse than letting five people die. I’m not exactly sure what they would say about your proposed scenario, but I assume they would say that, since there’s no choice and therefore no “killing someone” involved, the world where five people die is worse.
Similarly, given someone like you who argues that the correct answer in the standard trolley problem is to “yell real loud or call the police or break the game somehow,” I’m not sure what you would say about your own proposed scenario.