What I do not understand is when people use the words “right” or “wrong” independently of any agent’s preferences. I don’t see what they are referring to, or what those words even mean in that context.
Does anyone care to explain what I’m missing, or if there’s something specific I did to elicit downvotes?
I don’t know anything about downvotes, but I do think that there is a way of understanding ‘right’ and ‘wrong’ independently of preferences. But it takes a conceptual shift.
Don’t think of morality as a doctrine guiding you as to how to behave. Instead, imagine it as a doctrine teaching you how to judge the behavior of others (and to a lesser extent, yourself).
Morality teaches you when to punish and reward (and when to expect punishment and reward). It is a second-order concept, and hence not directly tied to preferences.
I do think that there is a way of understanding ‘right’ and ‘wrong’ independently of preferences. … Morality teaches you when to punish and reward (and when to expect punishment and reward). It is a second-order concept, and hence not directly tied to preferences.
Sociology? Psychology? Game theory? Mathematics? What does moral philosophy add to the sciences that is useful, that helps us to dissolve confusion and understand the nature of reality?
What does moral philosophy add to the sciences that is useful, that helps us to dissolve confusion and understand the nature of reality?
Moral philosophy, like all philosophy, does nothing directly to illuminate the nature of reality. What it does is to illuminate the nature of confusion.
How does someone who thinks that ‘morality’ is meaningless discuss the subject with someone who attaches meaning to the word? Answer: They talk to each other carefully and respectfully.
What do you call the subject matter of that discussion? Answer: Metaethics.
What do you call success in this endeavor? Answer: “Dissolving the confusion”.
Moral philosophy, like all philosophy, does nothing directly to illuminate the nature of reality. What it does is to illuminate the nature of confusion.
Moral philosophy does not illuminate the nature of confusion; it is the confusion. I am asking: what is missing, and what confusion is left, if you disregard moral philosophy and talk about right and wrong in terms of preferences?
I’m tempted to reply that what is missing is the ability to communicate with anyone who believes in virtue ethics or deontological ethics, and therefore doesn’t see how preferences are even involved. But maybe I am not understanding your point.
Perhaps an example would help. Suppose I say, “It is morally wrong for Alice to lie to Bob.” How would you analyze that moral intuition in terms of preferences? Whose preferences are we talking about here? Alice’s, Bob’s, mine, everybody else’s? For comparison purposes, also analyze the claim “It is morally wrong for Bob to strangle Alice.”
Due to your genetically hard-coded intuitions about appropriate behavior within groups of primates, your upbringing, your cultural influences, your rational knowledge about the virtues of truth-telling, and your preferences involving the well-being of other people, you feel obliged to influence the interaction between Alice and Bob in a way that persuades Alice to do what you want, without her feeling inappropriately influenced by you, by signaling your objection to certain behaviors as an appeal to a higher authority.
“It is morally wrong for Bob to strangle Alice.”
If you say, “I don’t want you to strangle Alice,” Bob might reply, “I don’t care what you want!”

If you say, “Strangling Alice might have detrimental effects on your other preferences,” Bob might reply, “I assign infinite utility to the death of Alice!” (which might very well be the case for humans in a temporary rage).

But if you say, “It is morally wrong to strangle Alice,” Bob might get confused and reply, “You are right, I don’t want to be immoral!” That is really a form of coercive persuasion, because when you say, “It is morally wrong to strangle Alice,” you actually signal, “If you strangle Alice you will feel guilty.” It is a manipulative method that might make Bob say, “You are right, I don’t want to be immoral!”, when what he actually means is, “I don’t want to feel guilty!”
Primates don’t like to be readily controlled by other primates. To get them to do what you want you have to make them believe that, for some non-obvious reason, they actually want to do it themselves.
This sounds like you are trying to explain away the phenomenon, rather than explain it. At the very least, I would think, such a theory of morality needs to make some predictions or explain some distinctions. For example, what is it about the situation that causes me to try to influence Alice and Bob using moral arguments in these cases, whereas I use other methods of influence in other cases?
For example, what is it about the situation that causes me to try to influence Alice and Bob using moral arguments in these cases, whereas I use other methods of influence in other cases?
Complex influences, like your culture and upbringing. That’s also why some people don’t say that it is morally wrong to burn a paperback book while others are outraged by the thought. And those differences and similarities can be studied, among other fields, in terms of cultural anthropology and evolutionary psychology.
Tackling such questions needs a multidisciplinary approach. But moral philosophy shouldn’t be part of the solution because it is largely mistaken about cause and effect. Morality is an effect of our societal and cultural evolution, shaped by our genetic predisposition as primates living in groups. In this sense moral philosophy is a meme that is part of a larger effect and therefore can’t be part of a reductionist explanation of itself. The underlying causes of cultural norms and our use of language can be explained by the social and behavioural sciences, applied mathematics like game theory, computer science, and linguistics.
But rationality shouldn’t be part of the solution because it is largely mistaken about cause and effect. Rationality is an effect of our societal and cultural evolution, shaped by our genetic predisposition as primates living in groups. In this sense rationality is a meme that is part of a larger effect and therefore can’t be part of a reductionist explanation of itself.
However, these claims are false, so you have to make a different argument.
I’ve seen this sort of substitution-argument a few times recently, so I’ll take this opportunity to point out that arguments have contexts, and if it seems that an argument does not contain all the information necessary to support its conclusions (because directly substituting in other words produces falsehood), this is because words have meanings, steps are elided, and there are things true and false in the world. This does not invalidate those arguments! These elisions are in fact necessary to prevent each argument from being a re-derivation of human society from mathematical axioms.

Arguers should try to be sensitive to the way in which the context of an argument may or may not change how that argument applies to other subjects. (A simple example: “You should not enter that tunnel because your truck is taller than the ceiling’s clearance” is a good argument only if the truck in question is actually taller than the ceiling’s clearance.) This especially applies when arguments are not meant to be formal, or in fact when they are not intended to be arguments.
These substitution arguments are quite a shortcut. The perpetrator doesn’t actually have to construct something that supports a specific point; instead, they can take an argument they disagree with, swap some words around, leave out any words that are inconvenient, post it, and if the result doesn’t make sense, the perpetrator wins!
Making a valid argument about why the substitution argument doesn’t make sense requires more effort than creating the substitution argument, so if we regard discussions here as a war of attrition, the perpetrator wins even if you create a well-reasoned reply to him.
Substitution arguments are garbage. I wish I knew a clean way to get rid of them. Thanks for identifying them as a thing to be confronted.
Cool, glad I’m not just imagining things! I think that sometimes this sort of argument can be valuable (“That person also has a subjective experience of divine inspiration, but came to a different conclusion”, frex), but I’ve become more suspicious of them recently—especially when I’m tempted to use one myself.
if it seems that an argument does not contain all the information necessary to support its conclusions (because directly substituting in other words produces falsehood), this is because words have meanings, steps are elided, and there are things true and false in the world. This does not invalidate those arguments!
Thing is, this is a general response to virtually any criticism whatsoever. And it’s often true! But it’s not always a terribly useful response. Sometimes it’s better to make explicit that bit of context, or that elided step.
Moreover it’s also a good thing to remember about the other guy’s argument next time you think his conclusions obviously do not follow from his (explicitly stated) premises—that is, next time you see what looks to you to be an invalid argument, it may not be even if strictly on a formal level it is, precisely because you are not necessarily seeing everything the other guy is seeing.
So, it’s not just about substitutions. It’s a general point.
Thing is, this is a general response to virtually any criticism whatsoever. And it’s often true! But it’s not always a terribly useful response. Sometimes it’s better to make explicit that bit of context, or that elided step.
True! This observation does not absolve us of our eternal vigilance.
Moreover it’s also a good thing to remember about the other guy’s argument next time you think his conclusions obviously do not follow from his (explicitly stated) premises—that is, next time you see what looks to you to be an invalid argument, it may not be even if strictly on a formal level it is, precisely because you are not necessarily seeing everything the other guy is seeing.
So, it’s not just about substitutions. It’s a general point.

Emphatically agreed.
For example, what is it about the situation that causes me to try to influence Alice and Bob using moral arguments in these cases, whereas I use other methods of influence in other cases?
Guilt works here, for example. (But XiXiDu covered that.) Social pressure also. Veiled threat and warning, too. Signaling your virtue to others as well. Moral arguments are so handy that they accomplish all of these in one blow.
ETA: I’m not suggesting that you in particular are trying to guilt trip people, pressure them, threaten them, or signal. I’m saying that those are all possible explanations as to why someone might prefer to couch their arguments in moral terms: it is more persuasive (as Dark Arts) in certain cases. Though I reject moralist language if we are trying to have a clear discussion and get at the truth, I am not against using Dark Arts to convince Bob not to strangle Alice.
Perplexed wrote earlier:

Morality teaches you when to punish and reward (and when to expect punishment and reward). It is a second-order concept, and hence not directly tied to preferences.
Sometimes you’ll want to explain why your punishment of others is justified. If you don’t want to engage Perplexed’s “moral realism”, then either you don’t think there’s anything universal enough (for humans, or in general) in it to be of explanatory use in the judgments people actually make, or you don’t think it’s a productive system for manufacturing (disingenuous yet generally persuasive) explanations that will sometimes excuse you.
Assuming I haven’t totally lost track of context here, I think I am saying that moral language works for persuasion (partially as Dark Arts), but is not really suitable for intellectual discourse.
Okay. Whatever he hopes is real (but you think is only confused), will allow you to form persuasive arguments to similar people. So it’s still worth talking about.
Virtue ethicists and deontologists merely express a preference for certain codes of conduct because they believe adhering to these codes will maximize their utility, usually via the mechanism of lowering their time preference.
ETA: And also, as XiXiDu points out, to signal virtue.
Upvoted because I strongly agree with the spirit of this post, but I don’t think moral philosophy succeeds in dissolving the confusion. So far it has failed miserably, and I suspect that it is entirely unnecessary. That is, I think this is one field that can be dissolved away.

Like if an atheist is talking to a religious person, then the subject matter is metatheology?
imagine it as a doctrine teaching you how to judge the behavior of others (and to a lesser extent, yourself).
Which metrics do I use to judge others?
There has been some confusion over the word “preference” in the thread, so perhaps I should use “subjective value”. Would you agree that the only tools I have for judging others are subjective values? (This includes me placing value on other people reaching a state of high subjective value.)
Or do you think there’s a set of metrics for judging people which has some spooky, metaphysical property that makes it “better”?
Or do you think there’s a set of metrics for judging people which has some spooky, metaphysical property that makes it “better”?
And why would that even matter as long as I am able to realize what I want without being instantly struck by lightning if I desire or do something that violates the laws of morality? If I live a happy and satisfied life of fulfilled preferences but constantly do what is objectively wrong, why exactly would that matter, to whom would it matter and why would I care if I am happy and my preferences are satisfied? Is it some sort of game that I am losing, where those who are the most right win? What if I don’t want to play that game, what if I don’t care who wins?
If I live a happy and satisfied life of fulfilled preferences but constantly do what is objectively wrong, why exactly would that matter,
Because it harms other people directly or indirectly. Most immoral actions have that property.
to whom would it matter
To the person you harm. To the victim’s friends and relatives. To everyone in the society which is kept smoothly running by the moral code which you flout.
and why would I care if I am happy and my preferences are satisfied?
Because you will probably be punished, and that tends to not satisfy your preferences.
Is it some sort of game that I am losing, where those who are the most right win?
If the moral code is correctly designed, yes.
What if I don’t want to play that game, what if I don’t care who wins?
Then you are, by definition, irrational, and a sane society will eventually lock you up as being a danger to yourself and everyone else.
Because it harms other people directly or indirectly. Most immoral actions have that property.
Begging the question.
To the person you harm. To the victim’s friends and relatives.
Either that is part of my preferences or it isn’t.
To everyone in the society which is kept smoothly running by the moral code which you flout.
Either society is instrumental to my goals or it isn’t.
Because you will probably be punished, and that tends to not satisfy your preferences.
Game theory? Instrumental rationality? Cultural anthropology?
If the moral code is correctly designed, yes.
If I am able to realize my goals, satisfy my preferences, don’t want to play some sort of morality game with agreed-upon goals, and am not struck by lightning once I violate those rules, why would I care?
Then you are, by definition, irrational...
What is your definition of irrationality? I wrote: if I am happy, able to reach all of my goals, and able to satisfy all of my preferences while constantly violating the laws of morality, how am I irrational?
Also, what did you mean by

… in response to “Because you will probably be punished, and that tends to not satisfy your preferences.”?
I think you mean that you should correctly predict the odds and disutility (over your life) of potential punishments, and then act rationally selfishly. I think this may be too computationally expensive in practice, and you may not have considered the severity of the (unlikely) event that you end up severely punished by a reputation of being an effectively amoral person.
Yes, we see lots of examples of successful and happy unscrupulous people in the news. But consider selection effects (that contradiction of conventional moral wisdom excites people and sells advertisements).
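To make the computational point concrete, here is a minimal sketch of the expected-utility comparison such a “rationally selfish” agent would have to run. Everything in it is hypothetical: the payoffs, the probabilities, and the helper name are invented for illustration, not drawn from any real data.

```python
# Minimal sketch of the expected-utility comparison; all numbers invented.

def expected_utility(gain, punishments):
    """gain: utility if the act goes unpunished;
    punishments: list of (probability, disutility) pairs."""
    p_caught = sum(p for p, _ in punishments)
    eu = (1 - p_caught) * gain          # the act pays off and nobody reacts
    for p, loss in punishments:
        eu += p * (gain - loss)         # the act pays off but is punished
    return eu

# Cheating: a small sure gain, a modest chance of a minor sanction, and a
# rare but severe loss: a lasting reputation as an effectively amoral person.
cheat = expected_utility(gain=10, punishments=[(0.05, 50), (0.01, 5000)])
honest = expected_utility(gain=0, punishments=[])

print(f"cheat: {cheat:.1f}, honest: {honest:.1f}")  # cheat: -42.5, honest: 0.0
# The rare, severe tail dominates; that is exactly the term that is easy
# to leave out of the calculation.
```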
I meant that we already have fields of applied mathematics and science that talk about those things; why do we need moral philosophy?
I am not saying that it is a clear-cut issue whether we, as computationally bounded agents, should abandon moral language, or that we would even want to do that. I am not advocating reducing the complexity of natural language. But this community seems to be committed to reductionism, to minimizing vagueness, and to describing human nature in terms of causal chains. I don’t think that moral philosophy fits this community.
This community doesn’t talk about theology either, it talks about probability and Occam’s razor. Why would it talk about moral philosophy when all of it can be described in terms of cultural anthropology, sociology, evolutionary psychology and game theory?
This community doesn’t talk about theology either [...] Why would it talk about moral philosophy when all of it can be described in terms of cultural anthropology, sociology, evolutionary psychology and game theory?
It is a useful umbrella term—rather like “advertising”.

Can all of it be described in those terms? Isn’t that a philosophical claim?
There’s nothing to dispute. You have a defensible position.
However, I think most humans have, as part of what satisfies them (they may not know it until they try it), the desire to feel righteous, which can most fully be realized with a hard-to-shake belief. For a rational person, moral realism may offer this without requiring tremendous self-delusion. (Disclaimer: I haven’t tried this.)
Is it worth the cost? Probably you can experiment. It’s true that if you formerly felt guilty and afraid of punishment, then deleting the desire to be virtuous (as much as possible) will feel liberating. In most cases, our instinctual fears are overblown in the context of a relatively anonymous urban society.
Still, reputation matters, and you can maintain it more surely by actually being what you present yourself as, rather than carefully (and eventually sloppily and over-optimistically) weighing each case in terms of odds of discovery and punishment. You could work on not feeling bad about your departures from moral perfection more directly, and then enjoy the real positive feeling-of-virtue (if I’m right about our nature), as well as the practical security. The only cost then would be lost opportunities to cheat.
It’s hard to know who to trust as having honest thoughts and communication on the issue, rather than presenting an advantageous image, when so much is at stake. Most people seem to prefer tasteful hypocrisy and tasteful hypocrites. Only those trying to impress you with their honesty, or those with whom you’ve established deep loyalties, will advertise their amorality.
What is your definition of irrationality? I wrote: if I am happy, able to reach all of my goals, and able to satisfy all of my preferences while constantly violating the laws of morality, how am I irrational?
It’s irrational to think that the evaluative buck stops with your own preferences.

Maybe he doesn’t care about the “evaluative buck”, which, while rather unfortunate, is certainly possible.

If he doesn’t care about rationality, he is still being irrational,

This.
I’m claiming that there is a particular moral code which has the spooky game-theoretical property that it produces the most utility for you and for others. That is, it is the metric which is Pareto optimal and which is also a ‘fair’ bargain.
So you’re saying that there’s one single set of behaviors, which, even though different agents will assign drastically different values to the same potential outcomes, balances their conflicting interests to provide the most net utility across the group. That could be true, although I’m not convinced.
Even if it is, though, what the optimal strategy is will change if the net values across the group changes. The only point I have ever tried to make in these threads is that the origin of any applicable moral value must be the subjective preferences of the agents involved.
The reason any agent would agree to follow such a rule set is if you could demonstrate convincingly that such behaviors maximize that agent’s utility. It all comes down to subjective values. There exists no other motivating force.
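To see both points in miniature, here is a toy sketch using the Nash bargaining solution, one standard game-theoretic way to formalize a “fair”, Pareto-optimal bargain (not necessarily the formalization Perplexed has in mind). The outcome names and payoffs are invented, and a real moral code would range over a vastly larger outcome space.

```python
# Toy version of a 'fair, Pareto-optimal' moral bargain: the Nash
# bargaining solution over a handful of joint behaviors. The outcome
# names and payoffs are invented for illustration.

def nash_bargain(outcomes, disagreement):
    """Pick the outcome maximizing the Nash product (u1 - d1) * (u2 - d2)."""
    d1, d2 = disagreement
    feasible = [(name, u1, u2) for name, (u1, u2) in outcomes.items()
                if u1 >= d1 and u2 >= d2]
    return max(feasible, key=lambda o: (o[1] - d1) * (o[2] - d2))

outcomes = {
    "both cooperate": (4, 4),
    "A favours self": (5.5, 3),
    "B favours self": (3, 5),
    "both defect":    (2, 2),
}
print(nash_bargain(outcomes, disagreement=(2, 2)))
# -> ('both cooperate', 4, 4): the 'fair', Pareto-optimal bargain.

# Now shift one agent's values: A no longer gains much from cooperating.
outcomes["both cooperate"] = (2.5, 4)
print(nash_bargain(outcomes, disagreement=(2, 2)))
# -> ('A favours self', 5.5, 3): a different code is now selected.
```

The first call selects the “fair”, Pareto-optimal bargain; after one agent’s valuations shift, the very same criterion selects a different set of behaviors, which is the point about preferences above.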
… what the optimal strategy is will change if the net values across the group changes.
True, but that may not be as telling an objection as you seem to think. For example, suppose you run into someone (not me!) who claims that the entire moral code is based on the ‘Golden Rule’ of “Do unto others as you would have others do unto you.” Tell that guy that moral behavior changes if preferences change. He will respond “Well, duh! What is your point?”.

There are people who do not recognize this. It was, in fact, my point.

Edit: Hmm, did I say something rude, Perplexed?
Not to me. I didn’t downvote, and in any case I was the first to use the rude “duh!”, so if you were rude back I probably deserved it. Unfortunately, I’m afraid I still don’t understand your point.
Perhaps you were rude to those unnamed people whom you suggest “do not recognize this”.

I think we may have reached that point, somewhat common on LW, where we’re arguing even though we have no disagreement.
It’s easy to bristle when someone, in response to you, points out something you thought it was obvious you knew. This happens all the time when people think they’re smart :)
I’m fond of including clarification like, “subjective values (values defined in the broadest possible sense, to include even things like your desire to get right with your god, to see other people happy, to not feel guilty, or even to “be good”).”
Some ways I’ve found to dissolve people’s language back to subjective utility:
If someone says something is good, right, bad, or wrong, ask, “For what purpose?”
If someone declares something immoral, unjust, unethical, ask, “So what unhappiness will I suffer as a result?”
But use sparingly, because there is a big reason many people resist dissolving this confusion.
Don’t think of morality as a doctrine guiding you as to how to behave. Instead, imagine it as a doctrine teaching you how to judge the behavior of others (and to a lesser extent, yourself).
Yes! That’s a point that I’ve repeated so often to so many different people [not on LW, though] that I’d more-or-less “given up”—it began to seem as futile as swatting flies in summer. Maybe I’ll resume swatting now I know I’m not alone.

Cool! Swat away. Though I’m not particularly happy with the metaphor.
Don’t think of morality as a doctrine guiding you as to how to behave.
This is mainly how I use morality. I control my own actions, not the actions of other people, so for me it makes sense to judge my own actions as good or bad, right or wrong. I can change them. Judging someone else changes nothing about the state of the world unless I can persuade them to act differently.
Judging someone else changes nothing about the state of the world unless I can persuade them to act differently.
Avoiding a person (a) does not (necessarily) persuade them to act differently, but (b) definitely changes the state of the world. This is not a minor nitpicking point. Avoiding people is also called social ostracism, and it’s a major way that people react to misbehavior. It has the primary effect of protecting those who do the avoiding. It often has the secondary effect of convincing the ostracized person to improve their behavior.
Then I would consider that a case where I could change their behaviour. There are instances where avoiding someone would bother them enough to have an effect, and other cases where it wouldn’t.
Avoiding people who misbehave will change the state of the world even if that does not affect their behavior. It changes the world by protecting you. You are part of the world.
it makes sense to judge my own actions as good or bad, right or wrong. I can change them.
Yes, but if you judge a particular action of your own to be ‘wrong’, then why should you avoid that action? The definition of wrong that I supply solves that problem. By definition if an action is wrong, then it is likely to elicit punishment. So you have a practical reason for doing right rather than doing wrong.
Furthermore, if you do your duty and reward and/or punish other people for their behavior, then they too will have a practical reason to do right rather than wrong.
Before you object “But that is not morality!”, ask yourself how you learned the difference between right and wrong.
ask yourself how you learned the difference between right and wrong.
It’s a valid point that I probably learned morality this way. I think that’s actually the definition of ‘preconventional’ morality: it’s based on reward/punishment. Maybe all my current moral ideas have roots in that childhood experience, but they aren’t covered by it anymore. There are actions that would be rewarded by most of the people around me, but which I avoid because I consider there to be a “better” alternative. (I should be able to think of more examples of this, but I guess one is laziness at work. I feel guilty if I don’t do the cleaning and maintenance that needs doing even though everyone else does almost nothing. I also try to follow a “golden rule”: if I don’t want something to happen to me, I won’t do it to someone else, even if the action is socially acceptable among my friends and wouldn’t be punished.)
I think that’s actually the definition of ‘preconventional’ morality: it’s based on reward/punishment.
Ah. Thanks for bringing up the Kohlberg stages—I hadn’t been thinking in those terms.
The view of morality I am promoting here is a kind of meta-pre-conventional viewpoint. That is, morality is not ‘that which receives reward and punishment’, it is instead ‘that which (consequentially) ought to receive reward and punishment, given that many people are stuck at the pre-conventional level’.
‘that which (consequentially) ought to receive reward and punishment, given that many people are stuck at the pre-conventional level’.
How many people? I think (I remember reading in my first-year psych textbook) that most adults functioning at a “normal” level in society are at the conventional level: they have internalized whatever moral standards surround them and obey them as rules, rather than thinking directly of punishment or reward. (They may still be thinking indirectly of punishment and reward; a conventionally moral person obeys the law because it’s the law and it’s wrong to break the law, implicitly because they would be punished if they did.) I’m not really sure how to separate how people actually reason on moral issues, versus how they think they do, and whether the two are often (or ever???) the same thing.
How many people are stuck at that level? I don’t know.
How many people must be stuck there to justify the use of punishment as deterrent? My gut feeling is that we are not punishing too much unless the good done (to society) by deterrence is outweighed by the evil done (to the ‘criminal’) by the punishment.
And also remember that we can use carrots as well as sticks. A smile and a “Thank you” provide a powerful carrot to many people. How many? Again, I don’t know, but I suspect that it is only fair to add these carrot-loving pre-conventionalists in with the ones who respond only to sticks.
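As a back-of-the-envelope check, my gut feeling above reduces to a single inequality; here is a sketch with invented numbers, purely to show the form of the calculation.

```python
# Back-of-the-envelope version of the deterrence condition above, with
# invented numbers: punishing is justified only while the harm deterred
# exceeds the harm inflicted on the punished.

def punishment_justified(p_deterred, harm_per_offense, would_be_offenses,
                         harm_to_punished, n_punished):
    deterrence_good = p_deterred * harm_per_offense * would_be_offenses
    punishment_evil = harm_to_punished * n_punished
    return deterrence_good > punishment_evil

# Hypothetical: a sanction deters 30% of 1000 would-be offenses (harm 10
# each) at the cost of harming 50 punished people by 40 units each.
print(punishment_justified(0.30, 10, 1000, 40, 50))  # True: 3000 > 2000
```

Carrots enter the same ledger as sticks: replace the harm done to the punished with the cost of providing the reward.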