...Slow deep breath… Ignore inflammatory and judgmental comments… Exhale slowly… Resist the urge to downvote… OK, I’m good.
First, as usual, TheOtherDave has already put it better than I could.
Maybe to elaborate just a bit.
First, almost everyone cares about the survival of the human race as a terminal goal. Very few have the infamous “après nous le déluge” (“after us, the flood”) attitude. It seems neither abstract nor arbitrary to me. I want my family, friends and their descendants to have a bright and long-lasting future, and that is predicated on humanity in general having one.
Second, a good life and a bright future for the people I care about does not necessarily require me to care about the wellbeing of everyone on Earth. So I only get mildly and non-scalably sad when bad stuff happens to strangers. Other people, including you, care a lot. Good for them.
Unlike you (and probably Eliezer), I do not tell other people what they should care about, and I get annoyed at those who think their morals are better than mine. And I certainly support any steps to stop people from actively making other people’s lives worse, be it abusing them, telling them whom to marry or how much and what cause to donate to. But other than that, it’s up to them. Live and let live and such.
Hope this helps you understand where I am coming from. If you decide to reply, please consider doing it in a thoughtful and respectful manner this time.
I’m actually having difficulty understanding the sentiment “I get annoyed at those who think their morals are better than mine”. I mean, I can understand not wanting other people to look down on you as a basic emotional reaction, but doesn’t everyone think their morals are better than other people’s?
That’s the difference between morals and tastes. If I like chocolate ice cream and you like vanilla, then oh well. I don’t really care and certainly don’t think my tastes are better for anyone other than me. But if I think people should value the welfare of strangers and you don’t, then of course I think my morality is better. Morals differ from tastes in that people believe that it’s not just different, but WRONG to not follow them. If you remove that element from morality, what’s left? The sentiment “I have these morals, but other people’s morals are equally valid” sounds good, all egalitarian and such, but it doesn’t make any sense to me. People judge the value of things through their moral system, and saying “System B is as good as System A, based on System A” is borderline nonsensical.
Also, as an aside, I think you should avoid rhetorical statements like “call me heartless if you like” if you’re going to get this upset when someone actually does.
I don’t.
Would you make that a normative statement?
Well, kinda-sorta. I don’t think the subject is amenable to black-and-white thinking.
I would consider people who think their personal morals are the very best there is to be deluded and dangerous. However, I don’t feel that people who think their morals are bad are to be admired and emulated either.
There is some similarity to the question of how smart you consider yourself to be. Thinking yourself smarter than everyone else is no good. Thinking yourself stupid isn’t good either.
So would you say that moral systems that don’t think they’re better than other moral systems are better than other moral systems? What happens if you now profess the former kind of moral system and agree with the whole statement? :)
In one particular aspect, yes. There are many aspects.
The barber shaves everyone who doesn’t shave himself..? X-)
So if my morality tells me that murdering innocent people is good, then that’s not worse than whatever your moral system is?
I know it’s possible to believe that (it was pretty much used as an example in my epistemology textbook for arguments against moral relativism), I just never figured anyone actually believed it.
You are confused between two very different statements:
(1) I don’t think that my morals are (always, necessarily) better than other people’s.
(2) I have no basis whatsoever for judging morality and/or behavior of other people.
What basis do you have for judging others’ morality other than your own morality? And if you ARE using your own morality to judge their morality, aren’t you really just checking for similarity to your own?
I mean, it’s the same way with beliefs. I understand not everything I believe is true, and I thus understand intellectually that someone else might be more correct (or, less wrong, if you will) than me. But in practice, when I’m evaluating others’ beliefs I basically compare them with how similar they are to my own. On a particularly contentious issue, I consider reevaluating my beliefs, which of course is more difficult and involved, but for simple judgement I just use comparison.
Which of course is similar to the argument people sometimes bring up about “moral progress”, claiming that a random walk would look like progress if it ended up where we are now (that is, progress is defined as similarity to modern beliefs).
My question, though, is: how do you judge morality/behavior if not through your own moral system? And if that is how you do it, how is your own morality not necessarily better?
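(To make the “checking for similarity” picture concrete, here is a deliberately crude sketch in Python. The value names, numbers, and scoring rule are all invented for illustration; nothing here is meant as a real model of how anyone evaluates beliefs.)

```python
# Toy model: judging another value system by its distance from one's own.
# All value names and weights below are invented.

my_values = {"stranger_welfare": 0.9, "honesty": 0.8, "loyalty": 0.4}

def judge(other):
    """Distance from my system; lower reads as 'judged better'."""
    return sum(abs(my_values[k] - other.get(k, 0.0)) for k in my_values)

alice = {"stranger_welfare": 0.2, "honesty": 0.9, "loyalty": 0.7}

print(judge(my_values))  # 0.0 -- my own system gets the best possible score
print(judge(alice))      # ~1.1 -- any system unlike mine is judged worse
```

The circularity is visible in the last two lines: whichever system supplies the metric is guaranteed to top its own ranking, which is exactly the “how is your own morality not necessarily better?” worry.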
if you ARE using your own morality to judge their morality, aren’t you really just checking for similarity to your own?
No, I don’t think so.
Morals are a part of the value system (mostly the socially-relevant part) and as such you can think of morals as a set of values. The important thing here is that there are many values involved, they have different importance or weight, and some of them contradict other ones. Humans, generally speaking, do not have coherent value systems.
When you need to make a decision, your mind evaluates (mostly below the level of your consciousness) a weighted balance of the various values affected by this decision. One side wins and you make a particular choice, but if the balance was nearly even you feel uncomfortable or maybe even guilty about that choice; if the balance was very lopsided, the decision feels like a no-brainer to you.
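(As an illustration only, a minimal sketch of that “weighted balance” picture; the value names, weights, and threshold are invented, and this is a cartoon, not a claim about actual cognition.)

```python
# Minimal sketch of a decision as a weighted balance of conflicting values.
# Positive weights pull toward the action, negative weights pull against it.

def decide(stakes, unease_threshold=0.2):
    """stakes: {value_name: signed weight}. Returns (choice, how it feels)."""
    balance = sum(stakes.values())
    choice = "do it" if balance > 0 else "refrain"
    # A near-even balance models the uncomfortable or guilty feeling;
    # a lopsided balance models the "no-brainer" feeling.
    feeling = "uneasy" if abs(balance) < unease_threshold else "no-brainer"
    return choice, feeling

print(decide({"career": 0.6, "honesty": -0.5}))   # ('do it', 'uneasy')
print(decide({"career": 0.6, "honesty": -0.05}))  # ('do it', 'no-brainer')
```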
Given the diversity and incoherence of personal values, comparison of morals is often an iffy thing. However, there’s no reason to consider your own value system to be the very best there is, especially given that it’s your conscious mind that makes such comparisons, while part of morality is submerged and usually unseen by the consciousness. Looking at an exact copy of your own morals, you will evaluate them as just fine, but not necessarily perfect.
Also don’t forget that your ability to manipulate your own morals is limited. Who you are is not necessarily who you wish you were.
This is a somewhat frustrating situation, where we both seem to agree on what morality is, but are talking over each other. I’ll make two points and see if they move the conversation forward:
1: “There’s no reason to consider your own value system to be the very best there is”
This seems to be similar to the point I made above about acknowledging on an intellectual level that my (factual) beliefs aren’t the absolute best there is. The same logic holds true for morals. I know I’m making some mistakes, but I don’t know where those mistakes are. On any individual issue, I think I’m right, and therefore logically if someone disagrees with me, I think they’re wrong. This is what I mean by “thinking that one’s own morals are the best”. I know I might not be right on everything, but I think I’m right about every single issue, even the ones I might really be wrong about. After all, if I were wrong about something, and I was also aware of this fact, I would simply change my beliefs to the right thing (assuming the concept is binary; I have many beliefs I consider to be only approximations, the best of any explanation I have heard so far. Not perfect, but “least wrong”).
Which brings me to point 2.
2: “Also don’t forget that your ability to manipulate your own morals is limited. Who you are is not necessarily who you wish you were.”
I’m absolutely confused as to what this means. To me, a moral belief and a factual belief are approximately equal, at least internally (if I’ve been equivocating between the two, that’s why). I know I can’t alter my moral beliefs on a whim, but that’s because I have no reason to want to. Consider self-modifying to want to murder innocents. I can’t do this, primarily because I don’t want to, and CAN’T want to for any conceivable reason (what reason does Gandhi have to take the murder pill if he doesn’t get a million dollars?). I suppose modifying instrumental values into terminal values (which morals are) to enhance motivation is a possible reason, but that’s an entirely different can of worms. If I wished I held certain moral beliefs, I already have them. After all, morality is just saying “You should do X”. So wishing I had a different morality is like saying “I wish I thought I should do X”. What does that mean?
Not being who you wish to be is an issue of akrasia, not morality. I consider the two to be separate issues, with morality being an issue of beliefs and akrasia being an issue of motivation.
In short, I’m with you for the first line and two following paragraphs, and then you pull a conclusion out in the next paragraph that I disagree with. Clearly there’s a discontinuity either in my reading or your writing.
That’s already an excellent start :-)
To me, a moral belief and a factual belief are approximately equal
Ah. It seems we approach morals from somewhat different angles. To you, morality is somewhat like physics—a system of “hard” facts that, generally speaking, are either correct or not. As you say, “On any individual issue, I think I’m right, and therefore logically if someone disagrees with me, I think they’re wrong.”
To me, morality is more like preferences—a flexible system of ways to evaluate choices. You can have multiple ways to do that, and they don’t have to be either correct or not.
Consider a simple example: eating meat. I am a carnivore and think that eating meat is absolutely fine from a moral point of view. Let’s take Alice, who is an ideological vegetarian. She feels that eating meat is morally wrong.
My moral position is different from (in fact, diametrically opposed to) Alice’s, but I’m not going to say that Alice’s morals are wrong. They are just different, and she has the full right to have her own.
That does not apply to everything, of course. There are “zones” where I’m fine with opposite morals and there are “zones” where I am not. But even when I would not accept a sufficiently different morality I would hesitate to call it wrong. It seems an inappropriate word to use when there is no external, objective yardstick one could apply. It probably would be better to say that there is a range of values/morals that I consider acceptable and there is a range which I do not.
If I wished I held certain moral beliefs, I already have them.
No, I don’t think so. Morals are values, not desires. It’s not particularly common to wish to hold different values (I think), but I don’t see why this is impossible. For example, consider somebody who values worldly success, winning, being at the top. But he has a side which isn’t too happy with this constant drive, the trampling of everything in the rush to be the first, the sacrifices it requires. That side of his would prefer him to value success less.
In general, people sometimes wish to radically change themselves (religious (de)conversions, acceptance of major ideologies, etc.) and that usually involves changing their morality. That doesn’t happen in a single moment.
My moral position is different from (in fact, diametrically opposed to) Alice’s, but I’m not going to say that Alice’s morals are wrong
You do realize she’s implicitly calling you complicit in the perpetuation of the suffering and deaths of millions of animals, right? I’m having difficulty understanding how you can NOT say that her morality is wrong. Her ACTIONS are clearly unobjectionable (eating plants is certainly not worse than eating meat under the vast majority of ethical systems), but her MORALITY is quite controversial. I have a feeling you accept this case because she is not doing anything that violates your own moral system, while you are doing something that violates hers. To use a (possibly hyperbolic and offensive) analogy, this is similar to a case where a murderer calls the morals of someone who doesn’t accept murder “just different”, and something they have the full right to have.
No, I don’t think so. (and following text)
I don’t think your example works. He values success, AND he values other things (family, companionship, etc.). I’m not sure why you’re calling different values “Different sides” as though they are separate agents. We all have values that occasionally conflict. I value a long life, even biological immortality if possible (I know, what am I doing on lesswrong with a value like that? /sarcasm), but I wouldn’t sacrifice 1000 lives a day to keep me alive atop a golden throne. This doesn’t seem like a case of my “Don’t murder” side wanting me to value immortality less; it’s more a case of considering the expected utility of my actions and coming to a conclusion about what collateral damage I’m willing to accept. It’s a straight calculation, no value readjustment required.
As for your last point, I’ve never experienced such a radical change (I was raised religiously, but outside of weekly Mass my family never seemed to take it very seriously, and I can’t remember caring too much about it). I actually don’t know what makes other people adopt ideologies. For me, I’m a utilitarian because it seems like a logical way to formalize my empathy and altruistic desires, and to this day I have difficulty grokking deontology like natural law theology (you would think being raised Catholic would teach you some of that. It did not).
So, to summarize my ramblings: I think your first example only LOOKS like reasonable disagreement because Alice’s actions are unobjectionable to you, and you would feel differently if positions were reversed. I think your example of different sides is really just explaining different values, which have to be weighed against each other but need not cause moral distress. And I have no idea what to make of your last point.
If I ignored or misstated any of your points, or am just completely talking over you and not getting the point at all, please let me know.
I’m having difficulty understanding how you can NOT say that her morality is wrong.
I think the terms “acceptable” and “not acceptable” are much better here than right and wrong.
If the positions were reversed, I might find Alice’s morality unacceptable to me, but I still wouldn’t call it wrong.
I’m not sure why you’re calling different values “Different sides” as though they are separate agents.
No, I’m not talking about different values here. Having different conflicting values is entirely normal and commonplace. I am here implicitly accepting the multi-agent theory of mind and saying that a part of Bob’s (let’s call the guy Bob) personality would like to change his values. It might even be a dominant part of Bob’s conscious personality, but it still is having difficulty controlling his drive to win.
Or let’s take a different example, with social pressure. Ali Ababwa emigrated from Backwardistan to the United States. His original morality was that women are… let’s say inferior. However Ali went to school in the US, got educated and somewhat assimilated. He understands—consciously—that his attitude towards women is neither adequate nor appropriate and moreover, his job made it clear to him that he ain’t in Backwardistan any more and noticeable sexism will get him fired. And yet his morals do not change just because he would prefer them to change. Maybe they will, eventually, but it will take time.
Sure, but do you accept that other people have?
I think akrasia could also be an issue of being mistaken about your beliefs, all of which you’re not conscious of at any given time.
It’s not clear to me that comparing moral systems on a scale of good and bad makes sense without a metric outside the systems.
So if my morality tells me that murdering innocent people is good, then that’s not worse than whatever your moral system is?
So while I wouldn’t murder innocent people myself, comparing our moral systems on a scale of good and bad is uselessly meta, since that meta-reality doesn’t seem to have any metric I can use. Any statements of good or bad are inside the moral systems that I would be trying to compare. Making a comparison inside my own moral system doesn’t seem to provide any new information.
There’s no law of physics that talks about morality, certainly. Morals are derived from the human brain though, which is remarkably similar between individuals. With the exception of extreme outliers, possibly involving brain damage, all people feel emotions like happiness, sadness, pain and anger. Shouldn’t it be possible to judge most morality on the basis of these common features, making an argument like “wanton murder is bad, because it goes against the empathy your brain evolved to feel, and hurts the survival chance you are born valuing”? I think this is basically the point EY makes about the “psychological unity of humankind”.
Of course, this dream goes out the window with UFAI and aliens. Let’s hope we don’t have to deal with those.
Shouldn’t it be possible to judge most morality on the basis of these common features, making an argument like “wanton murder is bad, because it goes against the empathy your brain evolved to feel, and hurts the survival chance you are born valuing”?
Yes, it should. However, in the hypothetical case involved, the reason is not true; the hypothetical brain does not have the quality “Has empathy and values survival and survival is impaired by murder”.
We are left with the simple truth that evolution (including memetic evolution) selects for things which produce offspring that imitate them, and “Has a moral system that prohibits murder” is a quality that successfully creates offspring that typically have the quality “Has a moral system that prohibits murder”.
The different quality “Commits wanton murder” is less successful at creating offspring in modern society, because convicted murderers don’t get to teach children that committing wanton murder is something to do.
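(A toy simulation of that selection story, with invented transmission rates; it shows only the direction of the effect, not anything quantitative about real populations.)

```python
# Toy memetic selection: a norm that prohibits murder is passed to the next
# generation more reliably than one that doesn't (e.g., because convicted
# murderers teach fewer children). The rates are invented; only their
# relative ordering matters.

pop = {"prohibits murder": 100.0, "commits wanton murder": 100.0}
growth = {"prohibits murder": 1.05, "commits wanton murder": 0.80}

for _ in range(20):  # twenty generations
    pop = {norm: count * growth[norm] for norm, count in pop.items()}

total = sum(pop.values())
for norm, count in pop.items():
    print(f"{norm}: {count / total:.1%}")
# prohibits murder: ~99.6%, commits wanton murder: ~0.4%
```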
I think those similarities are much less strong than EY appears to suggest; see e.g. “Typical Mind and Politics”.
It seems to me that when you explicitly make your own virtue or lack thereof a topic of discussion, and challenge readers in so many words to “call [you] heartless”, you should not then complain of someone else’s “inflammatory and judgmental comments” when they take you up on the offer.
And it doesn’t seem to me that Hedonic_Treader’s response was particularly thoughtless or disrespectful.
(For what it’s worth, I don’t think your comments indicate that you’re heartless.)
It’s interesting because people will often accuse a low-status outgroup of “thinking they are better than everyone else”*. But I had never actually seen anyone claim that their ingroup is better than everyone else; the accusation was always made of straw… until I saw Hedonic_Treader’s comment.
I do sort of understand the attitude of the utilitarian EAs. If you really believe that everyone must value everyone else’s life equally, then you’d be horrified by people’s brazen lack of caring. It is quite literally like watching a serial killer casually talk about how many people they killed and finding it odd that other people are horrified. After all, each life you fail to save is essentially the same as a murder under utilitarianism.
*I’ve seen people make this accusation against nerds, atheists, fedora wearers, feminists, left-leaning persons, Christians, etc.
I expect that’s correct, but I’m not sure your justification for it is correct. In particular it seems obviously possible for the following things all to be true:
A thinks her group is better than others.
A’s thinking this is obvious enough for B to be able to discern it with some confidence.
A never explicitly says that her group is better than others.
and I think people who say (e.g.) that atheists think they’re smarter than everyone else would claim that that’s what’s happening.
I repeat, I agree that these accusations are usually pretty strawy, but it’s a slightly more complicated variety of straw than simply claiming that people have said things they haven’t. More specifically, I think the usual situation is something like this:
A really does think that, to some extent and in some respects, her group is better than others.
But so does everyone else.
B imagines that he’s discerned unusual or unreasonable opinions of this sort in A.
But really he hasn’t; at most he’s picked up on something that he could find anywhere if he chose to look.
[EDITED to add, for clarity:] By “But so does everyone else” I meant that (almost!) everyone thinks that (many of) the groups they belong to are (to some extent and in some respects) better than others. Most of us mostly wouldn’t say so; most of us would mostly agree that these differences are statistical only and that there are respects in which our groups are worse too; but, still, on the whole if a person chooses to belong to some group (e.g., Christians or libertarians or effective altruists or whatever) that’s partly because they think that group gets right (or at least more right) some things that other groups get wrong (or at least less right).
I do imagine that the first situation is more common, in general, than the second.
This is entirely because of the point:
But so does everyone else.
A group that everyone considers better than others must be a single group, and probably very small; this requirement therefore limits your second scenario to a very small pool of people, while I imagine that your first scenario is very common.
Sorry, I wasn’t clear enough. By “so does everyone else” I meant “everyone else considers the groups they belong to to be, to some extent and in some respects, better than others”.
Ah, that clarification certainly changes your post for the better. Thanks. In light of it, I do agree that the second scenario is common; but looking closely at it, I’m not sure that it’s actually different to the first scenario. In both cases, A thinks her group is better; in both cases, B discerns that fact and calls excessive attention to it.
but, still, on the whole if a person chooses to belong to some group (e.g., Christians or libertarians or effective altruists or whatever) that’s partly because they think that group gets right (or at least more right) some things that other groups get wrong (or at least less right).
Well, if I belong to the group of chocolate ice cream eaters, I do think that eating chocolate ice cream is better than eating vanilla ice cream—by my standards; it doesn’t follow that I also believe it’s better by your standards or by objective standards (whatever they might be) and feel smug about it.
Sure. Some things are near-universally understood to be subjective and personal. Preference in ice cream is one of them. Many others are less so, though; moral values, for instance. Some even less; opinions about apparently-factual matters such as whether there are any gods, for instance.
(Even food preferences—a thing so notoriously subjective that the very word “taste” is used in other contexts to indicate something subjective and personal—can in fact give people that same sort of sense of superiority. I think mostly for reasons tied up with social status.)
Perhaps to avoid confusion, my comment wasn’t intended as an in-group out-group thing or even as a statement about my own relative status.
“Better than” and “worse than” are very simple relative judgments. If A rapes 5 victims a week and B rapes 6, A is a better person than B. If X donates 1% of his income potential to good charities and Y donates 2%, X is a worse person than Y (all else equal). It’s a rather simple statement of relative moral status.
Here’s the problem: If we pretend—like some in the rationalist community do—that all behavior is morally equivalent and all morals are equal, then there is no social incentive to behave prosocially when possible. Social feedback matters and moral judgments have their legitimate place in any on-topic discourse.
Finally, caring about not caring is self-defeating: one cannot logically judge judgmentalism without being judgmental oneself.
If we pretend—like some in the rationalist community do—that all behavior is morally equivalent and all morals are equal
That’s a strawman. I haven’t seen anyone say anything like that. What some people do say is that there is no objective standard by which to judge various moralities (that doesn’t make them equal, by the way).
there is no social incentive to behave prosocially when possible
Of course there is. Behavior has consequences regardless of morals. It is quite common to have incentives to behave (or not) in certain ways without morality being involved.
moral judgments have their legitimate place in any on-topic discourse.
Why is that?
Of course there is. Behavior has consequences regardless of morals. It is quite common to have incentives to behave (or not) in certain ways without morality being involved.
What do you mean by “morality”? Were the incentives the Heartstone wearer was facing when deciding whether to kill the kitten about morality, or not?
By morality I mean a particular part of somebody’s system of values. Roughly speaking, morality is the socially relevant part of the value system (though that’s not a hard definition, but rather a pointer to the area where you should search for it).
It seems self-termination was the most altruistic way of ending the discussion. A tad over the top, I think.
One can judge “judgmentalism on set A” without being “judgmental on set A” (while, of course, still being judgmental on set B).