What do you think validates a standard of morality?
Nothing, pretty much. I think standards of morality cannot be validated.
That’s not a very helpful retort.
I don’t know whether you think your position is defensible or it was just a throwaway line. It’s rather trivial to construct a bunch of moralities which will pass your validation criteria and look pretty awful at the same time.
It seems to me things like social consensus and ease of use are factors in determining whether a morality is popular, but I don’t see how they can validate moral values.
Nothing, pretty much. I think standards of morality cannot be validated.
In a handful of discussions now, you’ve commented “X doesn’t do Y,” and then later followed up with “nothing can do Y,” which strikes me as logically rude compared to saying “X doesn’t do Y, which I see as a special case of nothing doing Y.” For example, in this comment, asking the question “what does it mean for a moral principle to be validated?” seems like the best way to clarify peter_hurford’s position.
I do think that standards of morality can be ‘validated,’ but what I mean by that is that standards of morality have practical effects if implemented, and one approach to metaethics is to choose a moral system by the desirability of its practical effects. I understood peter_hurford’s response here to be “I don’t think practical effects are the reason to follow any morality.”
This comment makes great sense inside of a morality, because moralities often operate by setting value systems. If someone decides to adopt a value system which requires vegetarianism in order to signal that they are compassionate, that suggests their actual value system is the one which rewards signalling compassion. To use jargon, moralities want to be terminal goals, but in this metaethical system they are instrumental goals.
I don’t think this comment makes sense outside of a morality (i.e. I have a low opinion of the implied metaethics). If one is deciding whether to adopt morality A or morality B, knowing that A thinks B is immoral and B thinks A is immoral doesn’t help much (this is the content of the claim that a moral sphere restricted to humans is weird and arbitrary.) Knowing that morality A will lead to a certain kind of life and morality B will lead to a different kind of life seems more useful (although there’s still the question of how to choose between multiple kinds of lives!).
This leads to the position that even if you have the Absolutely Correct Morality handed to you by God, so long as that morality is furthered by more adherents it would be useful to think outside of that morality because standard persuasion advice is to emphasize the benefits the other party would receive from following your suggestion, rather than emphasizing the benefits you would receive if the other party follows your suggestions (“I get a referral bonus from the Almighty for every soul I save” is very different from “you’ll much prefer being in Heaven over being in Hell”). Instead of showing how your conclusion follows from your premises, it’s more effective to show how your conclusion is implied by their premises.
(I should point out that you can sort of see this happening in the use of “weird and arbitrary”: those words don’t make sense as a logical claim but do make sense as a social claim. “All the cool kids are vegetarian these days” is an actual and strong reason to become vegetarian.)
Well, I didn’t mean to be rude but I’ll watch myself a bit more carefully for such tendencies. Talking to people over the ’net leads one to pick up some unfortunate habits :-)
For example, in this comment...
That one actually was a bona fide question. I didn’t think morality could be validated, but on the other hand I didn’t spend too much time thinking about the issue. So—maybe I was missing something, and the question meant something like “well, how could one go about it?” Maybe there was a way which didn’t occur to me.
one approach to metaethics is to choose a moral system by the desirability of its practical effects.
I am not a big fan of such an approach because I think that in this respect ethics is like philosophy—any attempts at meta very quickly become just another ethics or just another philosophy. And choosing on the basis of consequences is the same thing as expecting a system of ethics to be consistent (since you evaluate the desirability of consequences on the basis of some moral values). In other words I don’t think ethics can be usefully tiered—it’s a flat system.
Oh, and I think that moralities do not set value systems. Moralities are value systems. And they are terminal goals (or criteria, or metrics, or standards), they cannot be instrumental (again, because it’s a flat system).
“All the cool kids are vegetarian these days” is an actual and strong reason to become vegetarian.
I very strongly disagree with this. From the descriptive side individual morality of course is influenced by social pressure. From the normative side, however, I don’t believe it should be.
I am not a big fan of such an approach because I think that in this respect ethics is like philosophy—any attempts at meta very quickly become just another ethics or just another philosophy.
Agreed that a given metaethical approach will cash out as a particular ethics in a particular situation. The reason I think it’s useful to go to metaethics is because you can then see the linkage between the situation and the prescription, which is useful for both insight and correcting flaws in an ethical system. I also think that while infinite regress problems are theoretically possible, for most humans there is a meaningful cliff suggesting it’s not worth it to go from meta-meta-ethics to meta-meta-meta-ethics, because to me ethics looks like a set of behaviors and responses, metaethics looks like psychology and economics, and meta-meta-ethics looks like biology.
I very strongly disagree with this. From the descriptive side individual morality of course is influenced by social pressure. From the normative side, however, I don’t believe it should be.
It seems to me that there are a lot of obvious ways for morality derived without any sort of social help to go wrong, but we may be operating under different conceptions of ‘pressure.’
because you can then see the linkage between the situation and the prescription
Can you give me an example where metaethics is explicitly useful for that? I don’t see why in flat/collapsed ethics this should be a problem.
to me ethics looks like a set of behaviors and responses, metaethics looks like psychology and economics, and meta-meta-ethics looks like biology.
Ah. Interesting. To me ethics is the practical application (that is, actions) of morality, which is a system of values. Morality is normative. Psychology and economics for me are descriptive (with an important side-note that they describe not only what is, but also boundaries for what is possible/likely). Biology provides powerful external forces and boundaries which certainly shape and affect morality, but they are external—you have to accept them as a given.
there are a lot of obvious ways for morality derived without any sort of social help to go wrong
Of course, but so what? I suspect this issue will turn on the attitude towards the primacy of the social vs the primacy of the individual.
Can you give me an example where metaethics is explicitly useful for that? I don’t see why in flat/collapsed ethics this should be a problem.
Sure, but first I should try to be a little clearer: by ‘situation’ here I mean the incentives on the agent, not any particular dilemma. That is, I would cluster ethics as rules that eat scenarios and output actions, and meta-ethics as rules that eat agent-world pairs and output ethics. As a side note, I think requiring this sort of functional upgrade when you move up a meta level makes the transition much more meaningful and makes infinite regress much less likely to happen practically.
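To make that clustering concrete, here is a minimal sketch in Python (the names Scenario, Action, Agent, and World are placeholders introduced purely for illustration, not anything from the comments above):

```python
from typing import Callable, NamedTuple

# Placeholder types, introduced purely for illustration.
class Scenario(NamedTuple):
    description: str

class Action(NamedTuple):
    description: str

class Agent(NamedTuple):
    name: str

class World(NamedTuple):
    description: str

# An ethics eats scenarios and outputs actions.
Ethics = Callable[[Scenario], Action]

# A meta-ethics eats agent-world pairs and outputs an ethics.
MetaEthics = Callable[[Agent, World], Ethics]

def toy_meta_ethics(agent: Agent, world: World) -> Ethics:
    """A toy meta-ethics: the ethics it returns depends on the agent-world pair."""
    def ethics(scenario: Scenario) -> Action:
        # A real rule would inspect the scenario; this one only shows the shape.
        return Action(f"what {agent.name} does about '{scenario.description}' in {world.description}")
    return ethics
```

The point of the type signatures is just that moving up a meta level changes what the rules take as input and what they output, which is the “functional upgrade” described above.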
I should also comment that I’ve been using ethics and morality interchangeably in this series of comments, even though I think it’s useful for the terms to be different along the lines you describe (of differentiating between value systems and action systems), mostly because I want to describe the system of picking value systems as meta-ethics instead of meta-morality.
It also seems worthwhile to remember that for most people, stated justifications follow decisions rather than decisions following stated justifications. This matches up with making decisions in near mode and justifying those decisions in far mode, which in the language I’m using here would look like far mode as ethics and near mode as meta-ethics.
An example would be vegetarianism. Vegetarianism in modern urban America, with a well-developed understanding of nutrition, is about as healthy as also eating animal products (possibly healthier, possibly less healthy, probably dependent on individual biology). Vegetarianism in undeveloped or rural areas is generally associated with malnutrition (often at subclinical levels, but that still has an effect on health and longevity). A metaethical system which recommends vegetarianism in America where it’s cheap and recommends against it in undeveloped areas where it’s expensive seems easy to construct; an ethical system which measures the weal gained and the woe inflicted on animals by eating meat and gets the balancing parameter just right to make the same recommendations seems difficult to construct.
(The operative phrase of that last sentence is the ‘balancing parameter’: if the stated justifications are driving the decisions, they need to be doing so quantitatively, and the parameters need to be inputs to the model, not outputs. It’s easy to say “this is the rule I want, find a parameter to implement that rule,” but difficult to say “this is the right parameter to use, and that results in this rule.”)
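To make the parameter-as-input vs parameter-as-output distinction concrete, here is a toy Python sketch; the function, the variable names, and every number in it are invented purely for illustration:

```python
# Toy model: recommend eating meat iff the personal benefit outweighs the
# animal harm weighted by the "balancing parameter" w.

def recommends_meat(personal_benefit: float, animal_harm: float, w: float) -> bool:
    return personal_benefit > w * animal_harm

# Parameter as output: decide the rule first ("meat where it's nutritionally
# needed, no meat where it isn't"), then pick whatever w reproduces it.
# Easy, but then the welfare calculation isn't doing any real work.
w_fitted_to_desired_rule = 0.5

# Parameter as input: commit to w on independent grounds, then accept whatever
# recommendations fall out, even the surprising ones. Hard.
w_chosen_in_advance = 1.0
print(recommends_meat(personal_benefit=3.0, animal_harm=4.0, w=w_chosen_in_advance))  # False
```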
I suspect this issue will turn on the attitude towards the primacy of the social vs the primacy of the individual.
Even if you give individuals primacy, specialization of labor is still a powerful force for efficiency. (For social influence to be an obvious net negative, I think you would need individual neurological diversity on a level far higher than we currently have, even though we do see some negative impacts at our current level of neurological diversity.)
I would cluster ethics as rules that eat scenarios and output actions, and meta-ethics as rules that eat agent-world pairs and output ethics.
Ah. That makes a lot of sense.
for most people, stated justifications follow decisions rather than decisions following stated justifications.
True, but the key word here is “stated”.
making decisions in near mode and justifying those decisions in far mode, which in the language I’m using here would look like far mode as ethics and near mode as meta-ethics.
That doesn’t look right to me. For most people (those who justify post-factum) the majority of their ethics is submerged, below their consciousness level. That’s why “stated” is a very important qualifier. People necessarily make decisions based on their “real” ethics but bringing the real reasons to the surface might not be psychologically acceptable to them, so post-factum justifications come into play.
I don’t think people making decisions in near mode apply rules that “eat agent-world pairs and output ethics”. I think that for many people factors like “convenience”, “lookin’ good”, and “let’s discount far future to zero” play a considerably larger role in their real ethics than they are willing to admit or even realize.
A metaethical system which recommends vegetarianism in America where it’s cheap and recommends against it in undeveloped areas where it’s expensive seems easy to construct; an ethical system which measures the weal gained and the woe inflicted on animals by eating meat and gets the balancing parameter just right to make the same recommendations seems difficult to construct.
I don’t see how this is so. Your meta-ethical system will still need to get that balancing parameter “just right” unless you start with the end result already known. Just because you divide the path from moral axioms to actions into several stages doesn’t mean you get to skip sections of that path; you still need to walk it all.
Oh, and I don’t believe modern America has a “well-developed understanding of nutrition”, though it’s a separate discussion altogether.
Even if you give individuals primacy, specialization of labor is still a powerful force for efficiency.
I don’t understand. What does specialization of labor have to do with morality?
And perhaps I should clarify my reaction. When I saw “All the cool kids are vegetarian these days” called “an actual and strong reason” to adopt this morality—well, my first thought was “All the cool kids root out hidden Jews / string up uppity Negroes / find and kill Tutsi / denounce the educated agents of imperialism / etc.” That must be an actual and strong reason to adopt this set of morals as well, right?
I don’t know how to figure out whether social influence is a net positive given that in practice social influence is always there and you can’t find a control group. My point is that accepting morality because many other people seem to follow it is a very dubious heuristic for me.
Agreed that it’s a stretch; “hidden ethics” versus “stated ethics” is a much more natural divide for the two. I do think that “convenience” and “lookin’ good” depend on the agent-world pair, but I think the adaptation is opaque and slow (i.e. learned when you’re young, over a long period) rather than explicit and fast.
I don’t see how this is so.
I was unclear there as well; I’m assuming that the “right” result is the one that maximizes the health and social standing of the implementer. Targeting that directly is easy; targeting it indirectly by using animal welfare is hard.
Oh, and I don’t believe modern America has a “well-developed understanding of nutrition”, though it’s a separate discussion altogether.
I was unclear; I meant that vegetarianism is safer for individuals with a well-developed understanding, not that urban America as a whole has a well-developed understanding.
I don’t understand. What does specialization of labor have to do with morality?
Many moral questions are hard to figure out, especially when they rely on second or third order effects. Think of the parable of the broken window, or of journalistic, clerical, or medical ethics which promise non-intervention or secrecy. There is strong value in the communication of moral claims, which I’m not sure how to distinguish from social pressure (and I think social pressure may be a necessary part of communicating those claims).
There is strong value in the communication of moral claims
It seems to me the issues of trust and credibility are dominant here. People get moral claims thrown at them constantly from different directions, many of them are incompatible or sometimes even direct opposites of each other. One needs some system of sorting them out, of evaluating them and deciding whether to accept them or not. Popularity is, of course, one such system but it has its problems, especially when moral claims come from those with power. There are obvious incentives in spreading moral memes advantageous to you.
I guess I see the social communication of moral claims as strongly manipulated by those who stand to gain from it (which basically means those with power—political, commercial, religious, etc.) and so suspect.
Nothing, pretty much. I think standards of morality cannot be validated.
I think we agree there, then.
It seems to me things like social consensus and ease of use are factors in determining whether a morality is popular, but I don’t see how they can validate moral values.
I was thinking of a different kind of “validation”.
I agree. You can state as a fact whether some action meets some standard of morality. That does nothing to validate a standard of morality, however.
Oh, boy. Social consensus, ease of use, really?
I’m not sure a standard of morality could ever be validated in the way you might like.