Your (a): I was not talking about a universal, but about a personal scalar ordering. Somewhere inside everybody’s brain there must be a mechanism that decides which of the considered options wins the competition for “most moral option of the moment”.
That’s a common utilitarian assumption/axiom, but I’m not sure it’s true. I think for most people, analysis stops at “this action is not wrong,” and potential actions are not ranked much beyond that. Thus, most people would not say that one is behaving immorally by volunteering at a soup kitchen, even if volunteering for MSF in Africa might be a more effective means of increasing the utility of other people. Your scalar ordering might work a bit better for the related, but distinct, concept of “praiseworthiness”—but even there, I think people’s intuitions are much too rough-hewn to admit of a stable scalar ordering.
To conceptualize that for you in a slightly different sense: we probably have far fewer brain states than the set of all possible actions we could hypothetically take in any given situation (once those possible actions are described in enough detail). Thus, it is simply wrong to say that we have ordered preferences over all of those possible actions—in fact, it would be impossible to have a unique brain state correspond to all possibilities. And remember—we are dealing here not with all possible brain states, but with all possible states of the portion of the brain which involves itself in ethical judgments.
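The counting point above can be made concrete with a toy pigeonhole sketch (all numbers here are purely illustrative, not claims about actual neuroanatomy): if the judgment module has fewer distinguishable states than there are finely described actions, some actions must land in the same state, and no strict total ordering over all of them can be represented.

```python
# Toy pigeonhole sketch (all numbers hypothetical): a judgment module
# with only n_states distinguishable states cannot assign a unique
# "moral rank" to more actions than it has states.
n_states = 4                                   # states of the moral-judgment module
actions = [f"action_{i}" for i in range(10)]   # finely described options

def evaluate(action):
    """Stand-in evaluation map: each action lands in exactly one state."""
    return hash(action) % n_states

buckets = {}
for a in actions:
    buckets.setdefault(evaluate(a), []).append(a)

# By pigeonhole, at least one state is shared, so those actions are
# indistinguishable to the module: no strict total order over all of them.
shared = max(buckets.values(), key=len)
print(len(shared))   # always >= 3 for 10 actions into 4 states
```

Whatever the evaluation map is, with 10 actions and 4 states some bucket must hold at least 3 actions, and those are necessarily tied.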
Your (b): I view morality not as the set of rules postulated by creed X at time T but as the result of a genetically biased social learning process. Morality is expressed through its influence on every (healthy) individual’s personal utility function.
Interesting, but I think also incomplete. To see why: ask yourself whether it makes sense for someone to ask you, following G.E. Moore, the following question:
“Yes, I understand that X is an action that I am disposed to prefer/regard favorably/etc for reasons having to do with evolutionary imperatives. Nevertheless, is it right/proper/moral to do X?”
In other words, there may well be evolutionary imperatives that drive us to engage in infidelity, murder, and even rape. Does that make those actions necessarily moral? If not, your account fails to capture a significant amount of the meaning of moral language.
(8) ? [Sorry, I don’t understand this one.]
Some component of ethical language is probably intended to serve prescriptive functions in social interactions. Thus, in some cases, when we say that “X is immoral” or “X is wrong” to someone proposing to engage in X, part of what we mean is simply “Do not do X.” I put that one last because I think it is less important as a component of our understanding of ethical language—typically, I think people don’t actually mean (8), but rather, (8) is logically implied as a prudential corollary of meanings 1-7.
To your voting scenario: I vote to torture the terrorist who proposes this choice to everyone. In other words, asking each one personally, “Would you rather be dust specked or have someone randomly tortured?” would be much like a terrorist demanding $1 per person (from the whole world), otherwise he will kill someone. In this case, of course, one would kill the terrorist.
So, the fact that an immoral person is forcing a choice upon you, means that there is no longer any moral significance to the choice? That makes no sense at all.
---
Unknown: Your example only has bite if you assume that moral preferences must be transitive across examples. I think you need to justify your argument that moral preferences must necessarily be immune to Dutch Books. I can see why it might be desirable for them to not be Dutch-Bookable; but not everything that is pleasant is true.
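For readers unfamiliar with the Dutch-book worry, it can be illustrated with a toy money pump against intransitive preferences (the options and the fee are hypothetical; this shows the standard argument, not anything specific to the moral case):

```python
# Money-pump sketch: an agent with the intransitive preferences
# A > B, B > C, C > A, who always pays a small fee to swap to any
# option it strictly prefers, cycles back to where it started, poorer.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}   # (better, worse) pairs

def money_pump(start, swaps, fee=1):
    """Walk the preference cycle, charging `fee` per preferred swap."""
    holding, paid = start, 0
    for _ in range(swaps):
        for better, worse in prefers:
            if worse == holding:        # a strictly preferred trade exists
                holding, paid = better, paid + fee
                break
    return holding, paid

end, total = money_pump("A", swaps=3)
print(end, total)   # back at "A", having paid 3 fees
```

The argument for transitivity is that an agent vulnerable to this pump loses money for nothing; the commenter’s point is that this pragmatic cost does not by itself show that moral preferences must be transitive.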