Whether an agent is moral and whether an action is moral are fundamentally different questions, operating on different types.
They’re not as different as the majority view makes them out to be. A moral agent is one that uses decision processes that systematically produce moral actions. Period. The majority view, by contrast, holds that a moral agent is not one whose decision processes are structured to produce moral actions, but one who has a virtuous free will. A natural extension of that view would be to say that someone whose decision process consistently produces immoral actions can still be moral, provided their free will is strong and virtuous enough to counterbalance that process.
The example above about a mind control ray has to do with changing the locus of intentionality controlling a person. It doesn’t have to do with the philosophical problem of free will. Does Dr. Evil have free will? It doesn’t matter for the purpose of determining whether his cognitive processes consistently produce immoral actions.
A moral agent is one that uses decision processes that systematically produce moral actions. Period.
It’s more complicated than that, because agent-morality is a scale, not a boolean, and how morally a person acts depends on the circumstances they’re placed in. So a judgment of how moral someone is must have some predictive aspect.
Suppose you have agents X and Y, and scenarios A and B. X will do good in scenario A but will do evil in scenario B, while Y will do the opposite. Now if I tell you that scenario A will happen, then you should conclude that X is a better person than Y; but if I instead tell you that scenario B will happen, then you should conclude that Y is a better person than X.
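To make that predictive aspect concrete, here is a minimal sketch of one way it could be formalized: treat agent-morality as the expected moral value of the agent’s actions, weighted by how likely each scenario is. The agents, scenarios, and numbers below are purely illustrative assumptions, not anything established above.

```python
# Sketch: agent-morality as expected act-morality, conditioned on
# which scenarios we believe will occur. All values are illustrative.

# Moral value of each agent's action in each scenario: +1 good, -1 evil.
act_morality = {
    ("X", "A"): +1, ("X", "B"): -1,
    ("Y", "A"): -1, ("Y", "B"): +1,
}

def agent_morality(agent, scenario_probs):
    """Expected moral value of the agent's actions under a
    probability distribution over scenarios."""
    return sum(p * act_morality[(agent, s)] for s, p in scenario_probs.items())

# Told that scenario A will happen, X comes out better than Y:
print(agent_morality("X", {"A": 1.0, "B": 0.0}))  # +1
print(agent_morality("Y", {"A": 1.0, "B": 0.0}))  # -1

# Told instead that B will happen, the ranking flips:
print(agent_morality("X", {"A": 0.0, "B": 1.0}))  # -1
print(agent_morality("Y", {"A": 0.0, "B": 1.0}))  # +1
```

Under uncertainty, say a 50/50 mix of A and B, X and Y come out equally moral. That is exactly why the judgment has to be predictive: it depends on the distribution over circumstances, not on any single act.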
The example above about a mind control ray has to do with changing the locus of intentionality controlling a person.
I don’t think “locus of intentionality” is the right way to think about this (except perhaps as a simplified model that reduces to conditioning on circumstances). In a society where mind control rays were common but some people were immune, we would say that the immune people were more moral than those who weren’t. In the society we actually have, we say that those who refuse to obey in the Milgram experiment are more moral, and that people who refuse to do evil under threat of force are more moral. A “locus of intentionality” model doesn’t handle these cases cleanly.