It clarifies the contrast between evaluating the rightness of an act in terms of the relative desirability of the likely states of the world after that act is performed or not performed, vs. evaluating the rightness of an act in other terms.
Okay, that sounds reasonable to me. But what do we mean by ‘act’ in this case? We could, for instance, imagine a trolley problem in which no one had the power to change the course of the train, and it just went down one track or the other by chance. We could still evaluate one outcome as better than the other (presumably the one in which one person dies instead of five), but there’s no action.
Are we making a moral judgement in that case? Or do we reason differently when an agent is involved?
I don’t know who “we” are.
What I say about your proposed scenario is that the hypothetical world in which five people die is worse than the hypothetical world in which one person dies, all else being equal. So, no, my reasoning doesn’t change because there’s an agent involved.
But someone who evaluates the standard trolley problem differently might come to different conclusions.
For example, I know any number of deontologists who argue that the correct answer in the standard trolley problem is to let the five people die, because killing someone is worse than letting five people die. I’m not exactly sure what they would say about your proposed scenario, but I assume they would say that in that case, since there’s no choice and therefore no “killing someone” involved, the world where five people die is simply worse.
Similarly, given someone like you who argues that the correct answer in the standard trolley problem is to “yell real loud or call the police or break the game somehow,” I’m not sure what you would say about your own proposed scenario.