So if my morality tells me that murdering innocent people is good, then that’s not worse than whatever your moral system is?
I know it’s possible to believe that (it was pretty much used as an example in my epistemology textbook for arguments against moral relativism); I just never figured anyone actually believed it.
What basis do you have for judging others’ morality other than your own morality? And if you ARE using your own morality to judge their morality, aren’t you really just checking for similarity to your own?
I mean, it’s the same way with beliefs. I understand not everything I believe is true, and I thus understand intellectually that someone else might be more correct (or, less wrong, if you will) than me. But in practice, when I’m evaluating others’ beliefs, I basically judge them by how similar they are to my own. On a particularly contentious issue, I consider reevaluating my beliefs, which of course is more difficult and involved, but for simple judgement I just use comparison.
Which of course is similar to the argument people sometimes bring up about “moral progress”, claiming that a random walk would look like progress if it ended up where we are now (that is, progress is defined as similarity to modern beliefs).
My question, though, is: how do you judge morality/behavior if not through your own moral system? And if that is how you do it, how is your own morality not necessarily better?
if you ARE using your own morality to judge their morality, aren’t you really just checking for similarity to your own?
No, I don’t think so.
Morals are a part of the value system (mostly the socially-relevant part) and as such you can think of morals as a set of values. The important thing here is that there are many values involved, they have different importance or weight, and some of them contradict other ones. Humans, generally speaking, do not have coherent value systems.
When you need to make a decision, your mind evaluates (mostly below the level of your consciousness) a weighted balance of the various values affected by this decision. One side wins and you make a particular choice, but if the balance was nearly even you feel uncomfortable or maybe even guilty about that choice; if the balance was very lopsided, the decision feels like a no-brainer to you.
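As a toy illustration (my own sketch, not part of the comment): the weighted-balance model described above can be put in a few lines of Python. The value names, weights, and effect numbers are all invented for the example.

```python
# Toy sketch of the "weighted balance of values" model described above.
# The value names, weights, and effect numbers are made up for illustration.

def decide(option_effects, weights):
    """Score an option as the weighted sum of its effects on each value."""
    return sum(weights[value] * effect for value, effect in option_effects.items())

# A hypothetical agent: weights express how much each value matters.
weights = {"honesty": 0.9, "kindness": 0.7, "self_interest": 0.4}

# How two choices affect each value, on a scale from -1 (violates) to 1 (fulfills).
tell_hard_truth = {"honesty": 1.0, "kindness": -0.6, "self_interest": 0.1}
white_lie = {"honesty": -1.0, "kindness": 0.8, "self_interest": 0.3}

score_truth = decide(tell_hard_truth, weights)
score_lie = decide(white_lie, weights)

# A nearly even balance corresponds to the uncomfortable, guilt-prone choice;
# a lopsided one to the "no-brainer".
margin = abs(score_truth - score_lie)
print("truth" if score_truth > score_lie else "lie", round(margin, 2))
```

With these invented numbers the balance is lopsided, so the choice would feel like a no-brainer; shrink the margin and you get the uncomfortable, near-even case.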
Given the diversity and incoherence of personal values, comparison of morals is often an iffy thing. However, there’s no reason to consider your own value system to be the very best there is, especially given that it’s your conscious mind that makes such comparisons, while part of morality is submerged and usually unseen by the consciousness. Looking at an exact copy of your own morals, you will evaluate them as just fine, but not necessarily perfect.
Also don’t forget that your ability to manipulate your own morals is limited. Who you are is not necessarily who you wish you were.
This is a somewhat frustrating situation, where we both seem to agree on what morality is, but are talking over each other. I’ll make two points and see if they move the conversation forward:
1: “There’s no reason to consider your own value system to be the very best there is”
This seems to be similar to the point I made above about acknowledging on an intellectual level that my (factual) beliefs aren’t the absolute best there is. The same logic holds for morals. I know I’m making some mistakes, but I don’t know where those mistakes are. On any individual issue, I think I’m right, and therefore logically if someone disagrees with me, I think they’re wrong. This is what I mean by “thinking that one’s own morals are the best”. I know I might not be right about everything, but I think I’m right about every single issue, even the ones I might really be wrong about. After all, if I was wrong about something, and I was also aware of this fact, I would simply change my beliefs to the right thing (assuming the concept is binary; many of my beliefs I consider to be only approximations, the best of any explanation I have heard so far. Not perfect, but “least wrong”).
Which brings me to point 2.
2: “Also don’t forget that your ability to manipulate your own morals is limited. Who you are is not necessarily who you wish you were.”
I’m absolutely confused as to what this means. To me, a moral belief and a factual belief are approximately equal, at least internally (if I’ve been equivocating between the two, that’s why). I know I can’t alter my moral beliefs on a whim, but that’s because I have no reason to want to. Consider self-modifying to want to murder innocents. I can’t do this, primarily because I don’t want to, and CAN’T want to for any conceivable reason (what reason does Gandhi have to take the murder pill if he doesn’t get a million dollars?). I suppose modifying instrumental values into terminal values (which morals are) to enhance motivation is a possible reason, but that’s an entirely different can of worms. If I wished I held certain moral beliefs, I already have them. After all, morality is just saying “You should do X”. So wishing I had a different morality is like saying “I wish I thought I should do X”. What does that mean?
Not being who you wish to be is an issue of akrasia, not morality. I consider the two to be separate issues, with morality being an issue of beliefs and akrasia being an issue of motivation.
In short, I’m with you for the first line and two following paragraphs, and then you pull a conclusion out in the next paragraph that I disagree with. Clearly there’s a discontinuity either in my reading or your writing.
To me, a moral belief and a factual belief are approximately equal
That’s already an excellent start :-)

Ah. It seems we approach morals from somewhat different angles. To you, morals are somewhat like physics: a system of “hard” facts which, generally speaking, are either correct or not. As you say, “On any individual issue, I think I’m right, and therefore logically if someone disagrees with me, I think they’re wrong.”
To me, morals are more like preferences: a flexible system for evaluating choices. You can have multiple such systems, and they don’t have to be either correct or incorrect.
Consider a simple example: eating meat. I am a carnivore and think that eating meat is absolutely fine from the morality point of view. Let’s take Alice who is an ideological vegetarian. She feels that eating meat is morally wrong.
My moral position is different from (in fact, diametrically opposed to) Alice’s, but I’m not going to say that Alice’s morals are wrong. They are just different, and she has the full right to have her own.
That does not apply to everything, of course. There are “zones” where I’m fine with opposite morals and there are “zones” where I am not. But even when I would not accept a sufficiently different morality I would hesitate to call it wrong. It seems an inappropriate word to use when there is no external, objective yardstick one could apply. It probably would be better to say that there is a range of values/morals that I consider acceptable and there is a range which I do not.
If I wished I held certain moral beliefs, I already have them.
No, I don’t think so. Morals are values, not desires. It’s not particularly common to wish to hold different values (I think), but I don’t see why this is impossible. For example, consider somebody who values worldly success, winning, being at the top. But he has a side which isn’t too happy with this constant drive, the trampling of everything in the rush to be the first, the sacrifices it requires. That side of his would prefer him to value success less.
In general, people sometimes wish to radically change themselves (religious (de)conversions, acceptance of major ideologies, etc.) and that usually involves changing their morality. That doesn’t happen in a single moment.
My moral position is different from (in fact, diametrically opposed to) Alice’s, but I’m not going to say that Alice’s morals are wrong
You do realize she’s implicitly calling you complicit in the perpetuation of the suffering and deaths of millions of animals, right? I’m having difficulty understanding how you can NOT say that her morality is wrong. Her ACTIONS are clearly unobjectionable (eating plants is certainly not worse than eating meat under the vast majority of ethical systems), but her MORALITY is quite controversial. I have a feeling you accept this case because she is not doing anything that violates your own moral system, while you are doing something that violates hers. To use a (possibly hyperbolic and offensive) analogy, this is similar to a case where a murderer calls the morals of someone who doesn’t accept murder “just different”, and something they have the full right to have.
No, I don’t think so. (and following text)
I don’t think your example works. He values success, AND he values other things (family, companionship, etc.). I’m not sure why you’re calling different values “Different sides” as though they are separate agents. We all have values that occasionally conflict. I value a long life, even biological immortality if possible (I know, what am I doing on lesswrong with a value like that? /sarcasm), but I wouldn’t sacrifice 1000 lives a day to keep me alive atop a golden throne. This doesn’t seem like a case of my “Don’t murder” side wanting me to value immortality less; it’s more a case of considering the expected utility of my actions and coming to a conclusion about what collateral damage I’m willing to accept. It’s a straight calculation, no value readjustment required.
As for your last point, I’ve never experienced such a radical change (I was raised religiously, but outside of weekly mass my family never seemed to take it very seriously, and I can’t remember caring too much about it). I actually don’t know what makes other people adopt ideologies. For me, I’m a utilitarian because it seems like a logical way to formalize my empathy and altruistic desires, and to this day I have difficulty grokking deontology like natural law theology (you would think being raised Catholic would teach you some of that. It did not).
So, to summarize my ramblings: I think your first example only LOOKS like reasonable disagreement because Alice’s actions are unobjectionable to you, and you would feel differently if positions were reversed. I think your example of different sides is really just explaining different values, which have to be weighed against each other but need not cause moral distress. And I have no idea what to make of your last point.
If I ignored or misstated any of your points, or am just completely talking over you and not getting the point at all, please let me know.
I’m having difficulty understanding how you can NOT say that her morality is wrong.
I think the terms “acceptable” and “not acceptable” are much better here than right and wrong.
If the positions were reversed, I might find Alice’s morality unacceptable to me, but I still wouldn’t call it wrong.
I’m not sure why you’re calling different values “Different sides” as though they are separate agents.
No, I’m not talking about different values here. Having different conflicting values is entirely normal and commonplace. I am here implicitly accepting the multi-agent theory of mind and saying that a part of Bob’s (let’s call the guy Bob) personality would like to change his values. It might even be a dominant part of Bob’s conscious personality, but it still is having difficulty controlling his drive to win.
Or let’s take a different example, with social pressure. Ali Ababwa emigrated from Backwardistan to the United States. His original morality was that women are… let’s say inferior. However, Ali went to school in the US, got educated and somewhat assimilated. He understands—consciously—that his attitude towards women is neither adequate nor appropriate, and moreover, his job made it clear to him that he ain’t in Backwardistan any more and noticeable sexism will get him fired. And yet his morals do not change just because he would prefer them to change. Maybe they will, eventually, but it will take time.

Sure, but do you accept that other people have?

I think akrasia could also be an issue of being mistaken about your beliefs, all of which you’re not conscious of at any given time.
It’s not clear to me that comparing moral systems on a scale of good and bad makes sense without a metric outside the systems.
So if my morality tells me that murdering innocent people is good, then that’s not worse than whatever your moral system is?
So while I wouldn’t murder innocent people myself, comparing our moral systems on a scale of good and bad is uselessly meta, since that meta-reality doesn’t seem to have any metric I can use. Any statements of good or bad are inside the moral systems that I would be trying to compare. Making a comparison inside my own moral system doesn’t seem to provide any new information.
There’s no law of physics that talks about morality, certainly. Morals are derived from the human brain though, which is remarkably similar between individuals. With the exception of extreme outliers, possibly involving brain damage, all people feel emotions like happiness, sadness, pain and anger. Shouldn’t it be possible to judge most morality on the basis of these common features, making an argument like “wanton murder is bad, because it goes against the empathy your brain evolved to feel, and hurts the survival chance you are born valuing”? I think this is basically the point EY makes about the “psychological unity of humankind”.
Of course, this dream goes out the window with UFAI and aliens. Let’s hope we don’t have to deal with those.
Shouldn’t it be possible to judge most morality on the basis of these common features, making an argument like “wanton murder is bad, because it goes against the empathy your brain evolved to feel, and hurts the survival chance you are born valuing”?
Yes, it should. However, in the hypothetical case involved, the reason is not true; the hypothetical brain does not have the quality “Has empathy and values survival and survival is impaired by murder”.
We are left with the simple truth that evolution (including memetic evolution) selects for things which produce offspring that imitate them, and “Has a moral system that prohibits murder” is a quality that successfully creates offspring that typically have the quality “Has a moral system that prohibits murder”.
The different quality “Commits wanton murder” is less successful at creating offspring in modern society, because convicted murderers don’t get to teach children that committing wanton murder is something to do.

I think those similarities are much less strong than EY appears to suggest; see e.g. “Typical Mind and Politics”.
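The selection argument about murder-prohibiting versus murder-committing memes can be sketched as a toy calculation; all numbers here are invented purely for illustration.

```python
# Toy model of memetic selection, with invented numbers: both memes are
# transmitted at the same base rate, but some fraction of "commits_murder"
# bearers is removed (caught) each generation before they can teach anyone.
pop = {"prohibits_murder": 100, "commits_murder": 100}
removal_rate = 0.3

for _ in range(10):
    pop = {
        "prohibits_murder": pop["prohibits_murder"],  # replaces itself 1:1
        "commits_murder": int(pop["commits_murder"] * (1 - removal_rate)),
    }

# No judgment about the memes' content is needed: the one that removes its
# own carriers from the teaching population dwindles regardless.
print(pop)
```

The point of the sketch is that the asymmetry comes entirely from differential reproduction, not from any external yardstick of good and bad.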
So if my morality tells me that murdering innocent people is good, then that’s not worse than whatever your moral system is?
I know it’s possible to believe that (it was pretty much used as an example in my epistemology textbook for arguments against moral relativism); I just never figured anyone actually believed it.
You are confused between two very different statements:
(1) I don’t think that my morals are (always, necessarily) better than other people’s.
(2) I have no basis whatsoever for judging morality and/or behavior of other people.