imagine it as a doctrine teaching you how to judge the behavior of others (and to a lesser extent, yourself).
Which metrics do I use to judge others?
There has been some confusion over the word “preference” in the thread, so perhaps I should use “subjective value”. Would you agree that the only tools I have for judging others are subjective values? (This includes me placing value on other people reaching a state of high subjective value.)
Or do you think there’s a set of metrics for judging people which has some spooky, metaphysical property that makes it “better”?
And why would that even matter as long as I am able to realize what I want without being instantly struck by thunder if I desire or do something that violates the laws of morality? If I live a happy and satisfied life of fulfilled preferences but constantly do what is objectively wrong, why exactly would that matter, to whom would it matter and why would I care if I am happy and my preferences are satisfied? Is it some sort of game that I am losing, where those who are the most right win? What if I don’t want to play that game, what if I don’t care who wins?
If I live a happy and satisfied life of fulfilled preferences but constantly do what is objectively wrong, why exactly would that matter,
Because it harms other people directly or indirectly. Most immoral actions have that property.
to whom would it matter
To the person you harm. To the victim’s friends and relatives. To everyone in the society which is kept smoothly running by the moral code which you flout.
and why would I care if I am happy and my preferences are satisfied?
Because you will probably be punished, and that tends to not satisfy your preferences.
Is it some sort of game that I am losing, where those who are the most right win?
If the moral code is correctly designed, yes.
What if I don’t want to play that game, what if I don’t care who wins?
Then you are, by definition, irrational, and a sane society will eventually lock you up as being a danger to yourself and everyone else.
Because it harms other people directly or indirectly. Most immoral actions have that property.
Begging the question.
To the person you harm. To the victim’s friends and relatives.
Either that is part of my preferences or it isn’t.
To everyone in the society which is kept smoothly running by the moral code which you flout.
Either society is instrumental to my goals or it isn’t.
Because you will probably be punished, and that tends to not satisfy your preferences.
Game theory? Instrumental rationality? Cultural anthropology?
If the moral code is correctly designed, yes.
If I am able to realize my goals, satisfy my preferences, don’t want to play some sort of morality game with agreed upon goals and am not struck by thunder once I violate those rules, why would I care?
Then you are, by definition, irrational...
What is your definition of irrationality? I wrote that if I am happy, able to reach all of my goals and satisfy all of my preferences while constantly violating the laws of morality, how am I irrational?
Also, what did you mean by
… in response to “Because you will probably be punished, and that tends to not satisfy your preferences.”?
I think you mean that you should correctly predict the odds and disutility (over your life) of potential punishments, and then act rationally selfishly. I think this may be too computationally expensive in practice, and you may not have considered the severity of the (unlikely) event that you end up severely punished by a reputation of being an effectively amoral person.
Yes, we see lots of examples of successful and happy unscrupulous people in the news. But consider selection effects (that contradiction of conventional moral wisdom excites people and sells advertisements).
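The “predict the odds and disutility” reasoning above can be sketched as a toy expected-value calculation. Every number below is a made-up assumption for illustration; the point is only that a rare but severe outcome (like a lasting amoral reputation) can dominate the sum even when the per-act odds of ordinary punishment look favorable.

```python
# Rule-breaking as a rationally-selfish expected-value problem.
# All probabilities and utilities are made-up illustrative numbers.
gain_from_cheating = 10.0    # utility gained if the act goes undetected
p_caught = 0.05              # estimated probability of ordinary detection
punishment_cost = 50.0       # disutility of the ordinary punishment
p_branded_amoral = 0.01      # rare event: a lasting reputation as amoral
reputation_cost = 1000.0     # lifetime disutility of that reputation

expected_value = (
    gain_from_cheating
    - p_caught * punishment_cost
    - p_branded_amoral * reputation_cost
)
print(expected_value)  # -2.5: the rare, severe term dominates the total
```

With these (hypothetical) numbers, the ordinary-punishment term only costs 2.5 of the 10 units gained, but the low-probability reputation term costs 10, flipping the act to negative expected value.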
I meant that we already do have a field of applied mathematics and science that talks about those things, why do we need moral philosophy?
I am not saying that it is a clear-cut issue that we, as computationally bounded agents, should abandon moral language, or even that we would want to. I am not advocating reducing the complexity of natural language. But this community seems to be committed to reductionism, to minimizing vagueness, and to describing human nature in terms of causal chains. I don’t think that moral philosophy fits this community.
This community doesn’t talk about theology either, it talks about probability and Occam’s razor. Why would it talk about moral philosophy when all of it can be described in terms of cultural anthropology, sociology, evolutionary psychology and game theory?
This community doesn’t talk about theology either[...]Why would it talk about moral philosophy when all of it can be described in terms of cultural anthropology, sociology, evolutionary psychology and game theory?
It is a useful umbrella term—rather like “advertising”.
Can all of it be described in those terms? Isn’t that a philosophical claim?
There’s nothing to dispute. You have a defensible position.
However, I think most humans have, as part of what satisfies them (they may not know it until they try it), the desire to feel righteous, which can most fully be realized with a hard-to-shake belief. For a rational person, moral realism may offer this without requiring tremendous self-delusion. (Disclaimer: I haven’t tried this.)
Is it worth the cost? Probably you can experiment. It’s true that if you formerly felt guilty and afraid of punishment, then deleting the desire to be virtuous (as much as possible) will feel liberating. In most cases, our instinctual fears are overblown in the context of a relatively anonymous urban society.
Still, reputation matters, and you can maintain it more surely by actually being what you present yourself as, rather than carefully (and eventually sloppily and over-optimistically) weighing each case in terms of odds of discovery and punishment. You could work on not feeling bad about your departures from moral perfection more directly, and then enjoy the real positive feeling-of-virtue (if I’m right about our nature), as well as the practical security. The only cost then would be lost opportunities to cheat.
It’s hard to know who to trust as having honest thoughts and communication on the issue, rather than presenting an advantageous image, when so much is at stake. Most people seem to prefer tasteful hypocrisy and tasteful hypocrites. Only those trying to impress you with their honesty, or those with whom you’ve established deep loyalties, will advertise their amorality.
What is your definition of irrationality? I wrote that if I am happy, able to reach all of my goals and satisfy all of my preferences while constantly violating the laws of morality, how am I irrational?
It’s irrational to think that the evaluative buck stops with your own preferences.
Maybe he doesn’t care about the “evaluative buck”, which, while rather unfortunate, is certainly possible.
If he doesn’t care about rationality, he is still being irrational.
This.
I’m claiming that there is a particular moral code which has the spooky game-theoretical property that it produces the most utility for you and for others. That is, it is the metric which is Pareto optimal and which is also a ‘fair’ bargain.
So you’re saying that there’s one single set of behaviors, which, even though different agents will assign drastically different values to the same potential outcomes, balances their conflicting interests to provide the most net utility across the group. That could be true, although I’m not convinced.
Even if it is, though, the optimal strategy will change if the net values across the group change. The only point I have ever tried to make in these threads is that the origin of any applicable moral value must be the subjective preferences of the agents involved.
The reason any agent would agree to follow such a rule set is if you could demonstrate convincingly that such behaviors maximize that agent’s utility. It all comes down to subjective values. There exists no other motivating force.
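The claimed “spooky game-theoretical property” can be made concrete with a toy one-shot Prisoner’s Dilemma: mutual cooperation (a minimal “moral code”) gives both players more than the mutual defection that purely selfish play converges on, and it splits the gains symmetrically (the ‘fair’ bargain). The payoff numbers below are illustrative assumptions, not anything from the thread.

```python
# One-shot Prisoner's Dilemma. PAYOFFS[(row_move, col_move)] gives
# (row_utility, col_utility); "C" = cooperate (follow the code), "D" = defect.
# The payoff numbers are illustrative assumptions.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def pareto_dominates(a, b):
    """True if outcome a gives every player at least as much as b, and someone more."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Mutual cooperation Pareto-dominates the mutual-defection equilibrium...
print(pareto_dominates(PAYOFFS[("C", "C")], PAYOFFS[("D", "D")]))  # True
# ...yet each player is privately tempted to defect, which is exactly why the
# code needs the "demonstrate it maximizes your utility" argument to hold.
print(PAYOFFS[("D", "C")][0] > PAYOFFS[("C", "C")][0])  # True
```

This also illustrates the previous comment’s point: change the agents’ payoffs and the Pareto-optimal “code” changes with them.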
… the optimal strategy will change if the net values across the group change.
True, but that may not be as telling an objection as you seem to think. For example, suppose you run into someone (not me!) who claims that the entire moral code is based on the ‘Golden Rule’ of “Do unto others as you would have others do unto you.” Tell that guy that moral behavior changes if preferences change, and he will respond “Well, duh! What is your point?”.
There are people who do not recognize this. It was, in fact, my point.
Edit: Hmm, did I say something rude, Perplexed?
Not to me. I didn’t downvote, and in any case I was the first to use the rude “duh!”, so if you were rude back I probably deserved it. Unfortunately, I’m afraid I still don’t understand your point.
Perhaps you were rude to those unnamed people whom you suggest “do not recognize this”.
I think we may have reached the point, somewhat common on LW, where we’re arguing even though we have no disagreement.
It’s easy to bristle when someone in response to you points out something you thought it was obvious that you knew. This happens all the time when people think they’re smart :)
I’m fond of including clarification like, “subjective values (values defined in the broadest possible sense, to include even things like your desire to get right with your god, to see other people happy, to not feel guilty, or even to “be good”).”
Some ways I’ve found to dissolve people’s language back to subjective utility:
If someone says something is good, right, bad, or wrong, ask, “For what purpose?”
If someone declares something immoral, unjust, unethical, ask, “So what unhappiness will I suffer as a result?”
But use sparingly, because there is a big reason many people resist dissolving this confusion.