What I think you mean is:
There is a function Should(human) (or Should(Eliezer)) which computes the human consensus (or Eliezer’s opinion) on what the morally correct course of action is.
And some alien belief systems have their own Should function, which would be, in form if not in content, similar to our own. So a paperclip maximiser doesn’t get a should, as it simply follows a “figure out how to maximise paper clips—then do it” format. However, a complex alien society that has many values, and feels it must kill everyone else for the artistic cohesion of the universe but often fails to act on this feeling because of akrasia, will get a Should(Krikkit) function.
However, until such time as we meet such an alien civilization, we should just use Should as a shorthand for Should(human).
Is my understanding correct?
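[To make the “Should as a function of whose values it consults” framing concrete, here is a minimal illustrative sketch in Python. The value systems, scores, and option names are invented placeholders, not anything from the exchange itself.]

```python
# Minimal sketch: one generic choosing procedure, parameterized by whose
# values it consults. Value systems, scores, and options are invented.

def should(values, options):
    """Return the option the given value system scores highest."""
    return max(options, key=values)

human_consensus = {"share": 2, "steal": -3, "do nothing": 0}.get
eliezer         = {"share": 3, "steal": -5, "do nothing": -1}.get

options = ["share", "steal", "do nothing"]
print(should(human_consensus, options))  # Should(human)
print(should(eliezer, options))          # Should(Eliezer)

# A paperclip maximiser also ranks options, but on this framing it gets no
# Should function: its criterion is just "maximise paperclips", not the
# many-valued structure the word is being used to point at.
```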
There could be a word defined that way, but for purposes of staying unconfused about morality, I prefer to use “would-want” so that “should” is reserved specifically for things that, you know, actually ought to be done.
“would-want”—under what circumstances? Superficially, it seems like pointless jargon. Is there a description somewhere of what it is supposed to mean?
Hmm. I guess not.
Fair enough. But are you saying that there is an objective standard of ought, or do you just mean a shared subjective standard? Or maybe a single subjective standard?
The word “ought” means a particular thing, refers to a particular function, and once you realize that, ought-statements have truth-values. There’s just nothing which says that other minds necessarily care about them. It is also possible that different humans care about different things, but there’s enough overlap that it makes sense (I believe; Greene does not) to use words like “ought” in daily communication.
What would the universe look like if there were such a thing as an “objective standard”? If you can’t tell me what the universe looks like in this case, then the statement “there is an objective morality” is not false—it’s not that there’s a closet which is supposed to contain an objective morality, and we looked inside it, and the closet is empty—but rather the statement fails to have a truth-condition. Sort of like opening a suitcase that actually does contain a million dollars, and you say “But I want an objective million dollars”, and you can’t say what the universe would look like if the million dollars were objective or not.
I should write a post at some point about how we should learn to be content with happiness instead of “true happiness”, truth instead of “ultimate truth”, purpose instead of “transcendental purpose”, and morality instead of “objective morality”. It’s not that we can’t obtain these other things and so must be satisfied with what we have, but rather that tacking on an impressive adjective results in an impressive phrase that fails to mean anything. It is not that there is no ultimate truth, but rather, that there is no closet which might contain or fail to contain “ultimate truth”, it’s just the word “truth” with the sonorous-sounding adjective “ultimate” tacked on in front. Truth is all there is or coherently could be.
When you put those together like that it occurs to me that they all share the feature of being provably final. I.e., when you have true happiness you can stop working on happiness; when you have ultimate truth you can stop looking for truth; when you know an objective morality you can stop thinking about morality. So humans are always striving to end striving.
(Of course whether they’d be happy if they actually ended striving is a different question, and one you’ve written eloquently about in the “fun theory” series.)
That’s actually an excellent way of thinking about it—perhaps the terms are not as meaningless as I thought.
Just a minor thought: there is a great deal of overlap on human “ought”s, but not so much on formal philosophical “ought”s. Dealing with philosophers often, I prefer to see ought as a function, so I can talk of “ought(Kantian)” and “ought(utilitarian)”.
Maybe Greene has more encounters with formal philosophers than you, and thus cannot see much overlap?
Re: “The word “ought” means a particular thing, refers to a particular function, and once you realize that, ought-statements have truth-values.”
A revealing and amazing comment—from my point of view. I had no idea you believed that.
What about alien “ought”s? Presumably you can hack the idea that aliens might see morality rather differently from us. So, presumably you are talking about ought(human)—glossing over our differences from one another.
There’s a human morality in about the same sense as there’s a human height.
There are no alien oughts, though there are alien desires and alien would-wants. They don’t see morality differently from us; the criterion by which they choose is simply not that which we name morality.
This is a wonderful epigram, though it might be too optimistic. The far more pessimistic version would be “There’s a human morality in about the same sense as there’s a human language.” (This is what Greene seems to believe and it’s a dispute of fact.)
Eliezer, I think your proposed semantics of “ought” is confusing, and doesn’t match up very well with ordinary usage. May I suggest the following alternative?
ought_X refers to X’s would-wants if X is an individual. If X is a group, then ought_X is the overlap between the oughts of its members.
In ordinary conversation, when people use “ought” without an explicit subscript or possessive, the implicit X is the speaker plus the intended audience (not humanity as a whole).
ETA: The reason we use “ought” is to convince the audience to do or not do something, right? Why would we want to refer to ought_humanity, when ought_{speaker+audience} would work just fine for that purpose, and ought_{speaker+audience} covers a lot more ground than ought_humanity?
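[A rough sketch of the group case in this proposal, reading each member’s ought as a set of endorsed judgments and the group’s ought as their overlap; the members, judgments, and the choice of plain set intersection for “overlap” are all illustrative assumptions.]

```python
# Illustrative sketch: ought_X for a group X as the overlap (here, set
# intersection) of its members' individual oughts. Members and judgments
# are invented examples.

from functools import reduce

def ought(members):
    """Overlap of the oughts of a group's members."""
    return reduce(set.intersection, members)

speaker  = {"keep promises", "don't steal", "tip generously"}
audience = {"keep promises", "don't steal", "eat no meat"}
stranger = {"keep promises"}

print(ought([speaker, audience]))            # ought_{speaker+audience}
print(ought([speaker, audience, stranger]))  # toward ought_humanity: smaller
```

Adding more members can only shrink the overlap, which is the sense in which ought_{speaker+audience} covers more ground than ought_humanity.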
“There’s a human morality in about the same sense as there’s a human language.” (This is what Greene seems to believe and it’s a dispute of fact.)
That seems to hit close to the mark. Human language contains all sorts of features that are more or less universal to humans due to their hardware, while also being significantly determined by cultural influences. It also shares the feature that certain types of language (and ‘ought’ systems) are more useful in different cultures or subcultures.
This is a wonderful epigram, though it might be too optimistic. The far more pessimistic version would be…
I’m not sure I follow this. Neither seems particularly pessimistic to me, and I’m not sure how one could be worse than the other.
Jumping recklessly in at the middle: even granting your premises regarding the scope of ‘ought’, it is not wholly clear that an alien “ought” is impossible. As timtyler pointed out, the Babyeaters in “Three Worlds Collide” probably had a would-want structure within the “ought” cluster in thingspace, and systems of behaviors have been observed in some nonhuman animals which resemble human morality.
I’m not saying it’s likely, though, so this probably constitutes nitpicking.
“There are no alien oughts” and “They don’t see morality differently from us”—these seem like more bizarre-sounding views on the subject of morality—and it seems especially curious to hear them from the author of the “Baby-Eating Aliens” story.
Look, it’s not very complicated: When you see Eliezer write “morality” or “oughts”, read it as “human morality” and “human oughts”.
It isn’t that simple either. Human morality contains a significant component of trying to coerce other humans into doing things that benefit you. Even on a genetic level, humans come with significantly different ways of processing moral thoughts: what is often called ‘personality’, particularly in the context of ‘personality type’.
The translation I find useful is to read it as “Eliezer-would-want”. By the definitions Eliezer has given us, the two must be identical. (Except, perhaps, if Eliezer has for some reason decided to make himself immoral a priori.)
Um, that’s what I just said: “presumably you are talking about ought(human)”.
We were then talking about the meaning of ought.
There’s also the issue of whether to discuss ought(past humans) and ought(present humans)—which are evidently quite different—due to the shifting moral zeitgeist.
Well then, I don’t understand why you would find statements like “There are no alien [human oughts]” and “They don’t see [human morality] differently from us” bizarre-sounding.
Having established EY meant ought(human), I was asking about ought(alien).
Maybe you are right—and EY misinterpreted me—and genuinely thought I was asking about ought(human).
If so, that seems like a rather ridiculous question for me to be asking—and I’m surprised it made it through his sanity checker.
Re: “the criterion by which they choose is simply not that which we name morality.”
Even if “morality” means “criterion for choosing…”? Their criterion might have a different referent, but that does not imply a different sense. Cf. “this planet”. Out of the two, sense has more to do with meaning, since it doesn’t change with changes of place and time.
Then we need a better way of distinguishing between what we’re doing and what we would be doing if we were better at it.
You’ve written about the difference between rationality and believing that one’s bad arguments are rational.
For the person who is in the latter state, something that might be called “true rationality” is unimaginable, but it exists.
Thanks, this has made your position clear. And—apart from tiny differences in vocabulary—it is exactly the same as mine.
So what about the Ultimate Showdown of Ultimate Destiny?
...sorry, couldn’t resist.
But there is a truth-condition for whether a showdown is “ultimate” or not.
This sentence is much clearer than the sort of thing you usually say.
A single subjective standard. But he uses different terminology, with that difference having implications about how morality should (full Eliezer meaning) be thought about.
It can be superficially considered a shared subjective standard, inasmuch as many other humans have morality that overlaps with his in some ways, and also in the sense that his morality includes (if I recall correctly) the preferences of others somewhere within it. I find it curious that the final result leaves language and positions reminiscent of those begotten by a belief in an objective standard of ought, but without requiring totally insane beliefs like, say, theism, or predicting that a uFAI will learn ‘compassion’ and become an FAI just because ‘should’ is embedded in the universe as an inevitable force or something.
Still, if I am to translate the Eliezer word into the language of Stuart_Armstrong, it matches “a single subjective standard, but I’m really serious about it”. (Part of me wonders if Eliezer’s position on this particular branch of semantics would be any different if there were fewer non-sequitur rejections of Bayesian statistics with that pesky ‘subjective’ word in it.)