I think it would do us all a lot of good (and it would be a lot clearer) to use the word ‘morality’ to mean all the implications that follow from all terminal values, much as we use the word ‘mathematics’ to mean all the theorems that follow from all axioms. This would force us to specify which kind of morality we’re talking about.
For example, it would be meaningless to ask if I should steal from the rich. It would only be meaningful to ask if I me-should steal from the rich (i.e. if it follows from my terminal values), or if I you-should steal from the rich (i.e. if it follows from your terminal values), or if I us-should steal from the rich (i.e. if it follows from the terminal values we share), or if I Americans-should steal from the rich (i.e. if it follows from the terminal values that Americans share), etc.
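To make the indexing concrete, here is a toy sketch of what I mean; it is only an illustration of the bookkeeping, not a real decision theory, and the value sets and the crude ‘follows from’ check are entirely made up:

```python
# Toy sketch (mine, not Eliezer's): "X-should A" means "A follows from X's
# terminal values". The value sets and the crude 'follows from' check are
# invented stand-ins, not a real decision theory.

MY_VALUES = {"reduce inequality"}
YOUR_VALUES = {"respect property", "reduce suffering"}

ENDORSED_BY = {"steal from the rich": {"reduce inequality"}}
FORBIDDEN_BY = {"steal from the rich": {"respect property"}}

def should(action, terminal_values):
    """True iff the action 'follows from' the given terminal values
    (crudely: endorsed by at least one of them and forbidden by none)."""
    endorsed = bool(ENDORSED_BY.get(action, set()) & terminal_values)
    forbidden = bool(FORBIDDEN_BY.get(action, set()) & terminal_values)
    return endorsed and not forbidden

action = "steal from the rich"
print(should(action, MY_VALUES))                # me-should?  -> True
print(should(action, YOUR_VALUES))              # you-should? -> False
print(should(action, MY_VALUES & YOUR_VALUES))  # us-should?  -> False
```

The point is just that ‘should’ never appears without an index: change the value set and you may change the verdict.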
I know I’m not explaining anything you don’t already know, Eliezer; my point is that your use of the words ‘morality’ and ‘should’ has been confusing quite a few people. Or perhaps it would be more accurate to say that your use of those words has failed to extricate certain people from their pre-existing confusion.
But then morality does not have as its subject matter “Life, consciousness, and activity; health and strength; pleasures and satisfactions of all or certain kinds; happiness, beatitude, contentment, etc.; truth; knowledge and true opinions of various kinds, understanding, wisdom; beauty, harmony, proportion in objects contemplated; aesthetic experience; morally good dispositions or virtues; mutual affection, love, friendship, cooperation; just distribution of goods and evils; harmony and proportion in one’s own life; power and experiences of achievement; self-expression; freedom; peace, security; adventure and novelty; and good reputation, honor, esteem, etc.”
Instead, it has primarily as its subject matter a list of ways to transform the universe into paperclips, cheesecake, needles, orgasmium, and only finally, a long way down the list, into eudaimonium.
I think this is not the subject matter that most people are talking about when they talk about morality. We should have a different name for this new subject, like “decision theory”.
I think this is not the subject matter that most people are talking about when they talk about morality.
True, as long as they’re talking about the stuff that is implied by their terminal values.
However, when they start talking about the stuff that is implied by other people’s (or aliens’, or AIs’) terminal values, the meaning they attach to the word ‘morality’ is a lot closer to the one I’m proposing. They might say things like, “Well, female genital mutilation is moral to Sudanese people. Um, I mean, errr, uh...”, and then they’re really confused. This confusion would vanish (or at least, would be more likely to vanish) if they were forced to say, “Well, female genital mutilation is Sudanese-moral but me-immoral.”
Ideally, to avoid all confusion we should get rid of the word morality completely, and have everyone speak in terms of goals and desires instead.
Agreed. If it happened that there were only a few different sets of terminal values in existence, then I would be OK with assigning different words to the pursuit of those different sets. One of those words could be ‘moral’. However, as it stands, the set of all terminal values represented by humans is too fractured and varied.
A large chunk of the list Eliezer provides in the above comment probably is nearly universal to humanity, but the entire list is not, and there are certainly many disputes on the relative ordering (especially as to what is on top).
But then morality does not have as its subject matter....
I think you can keep that definition: define morality as morality-human. However, at least in the metaethics sequence, it would have done a lot of good to distinguish between morality-Joe and morality-Jane even if you were eventually going to argue that the two were equivalent. Once you’re finished arguing that point, however, go on using the term “morality” the way you want to.
I only say this because of my own experience. I didn’t really understand the metaethics sequence when I first read it. I was also struggling with Hume at the time, and it was actually that struggle that led me to make the connection between what an agent “should” do and decision theory. Only later did I realize that was exactly what you were doing, and I chalk part of it up to confusing terminology. If you dig through some of the original posts, I was (one of many?) mistaking your arguments for classical utilitarianism.
On the other hand, I may not be representative. I’m used to thinking of agents’ utility functions through economics, so the leap to should-X/morality-X connected to X’s utility function was a small one, relatively speaking.
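To spell out that leap, here is a minimal sketch of reading ‘should-X’ as ‘whatever maximizes X’s utility function’; Joe, Jane, the options, and the utility numbers are all invented for illustration and are not anything from the sequence:

```python
# Minimal sketch of "should-X" as "whatever maximizes X's utility function".
# Joe, Jane, the options, and the utility numbers are made up for
# illustration; nothing here is Eliezer's formalism.

OPTIONS = ["steal from the rich", "donate to charity"]

UTILITY = {
    "Joe":  {"steal from the rich": -1.0, "donate to charity": 2.0},
    "Jane": {"steal from the rich": 1.0, "donate to charity": 0.5},
}

def should_do(agent):
    """Return the option that maximizes the given agent's utility."""
    return max(OPTIONS, key=lambda option: UTILITY[agent][option])

print(should_do("Joe"))   # Joe-should  -> donate to charity
print(should_do("Jane"))  # Jane-should -> steal from the rich
```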
I thought there was no way I could ever understand what Eliezer had written, but you’ve provided a clue. Should I translate this:
Morality is about how to save babies, not eat them, everyone knows that and they happen to be right. If we could get past difficulties of the translation, the babyeaters would agree with us about what is moral, we would agree with them about what is babyeating, and we would agree about the physical fact that we find different sorts of logical facts to be compelling.
as this?
Human-morality is about how to save babies, not eat them, everyone knows that and they happen to be right. If we could get past difficulties of the translation, the babyeaters would agree with us about what is human-moral, we would agree with them about what is babyeating-moral, and we would agree about the physical fact that we find different sorts of logical facts to be compelling.
Also, and this was especially perplexing, should I translate:
“What should be done with the universe” invokes a criterion of preference, “should”, which compels humans but not Babyeaters. If you look at the fact that the Babyeaters are out trying to make a different sort of universe [...] They do the babyeating thing, we do the right thing;
as:
“What should be done with the universe” invokes a criterion of preference, “human-should”, which compels humans but not Babyeaters. If you look at the fact that the Babyeaters are out trying to make a different sort of universe [...] They do the babyeating-right thing, we do the human-right thing; ?
Yes.
Yes!
No. See other replies.
I understand and agree with your point that the long list of terminal values that most humans share aren’t the ‘right’ ones merely because they’re the values that humans happen to have. If Omega altered the brain of every human so that we had completely different values, ‘morality’ wouldn’t change.
Therefore, to be perfectly precise, byrnema would have to edit her comment to substitute the long list of values that humans happen to share for the word ‘human’, and the long list of values that Babyeaters happen to share for the word ‘babyeating’.
So yeah, I get why someone who doesn’t want to create this kind of confusion in his interlocutors would avoid saying “human-right” and “human-moral”. The problem is that you’re creating another kind of confusion.
Is this because morality is reserved for a particular list (the list we currently have) rather than a token for any list that could be had?
It’s because [long list of terminal values that current humans happen to share]-morality is defined by the long list of terminal values that current humans happen to share. It’s not defined by the list of terminal values that post-Omega humans would happen to have.
Is arithmetic “reserved for” a particular list of axioms or for a token for any list of axioms? Neither. Arithmetic is its axioms and all that can be computed from them.
See, I think you’re misunderstanding his response. I mean, that’s the only way I can interpret it that makes sense.
Your insistence that it is not the right interpretation is very odd. I get that you don’t want to trigger people’s cooperation instincts, but that’s the only framework in which talking about other beings makes sense.
The morality you are talking about is the human-now-extended morality (well, closer to the less-wrong-now-extended morality), in that it is the morality that results from extending from the values humans currently have. Now you seem to have a need to categorize your own morality as different from others’ in order to feel right about imposing it? So you categorize it as simply morality, but your morality is not necessarily my morality, and so that categorization feels iffy to me. Now it’s certainly closer to mine than to the Babyeaters’, but I have no proof it is the same. Calling it simply Morality papers over this.
You’re wrong. Despite how much I’d like to have a universal, ultimate, true morality, you can’t create it out of whole cloth by defining it as “what-humans-value”. That’s pretending there’s no reason to look up, because, “Look! It’s right there in front of you. So be sure not to look up.”