“Should” implies a goal according to some set of values. Since the genie is mischievous, it might e.g. tell you what you should ask for so as to make the consequences maximally amusing to the genie.
Do you mean this as an empirical claim about the way we use the word? I think it’s at least grammatical to say ‘What should my ultimate, terminal goals be?’ Why can’t I ask the genie that?
Taboo the word “should” and try to ask that question again. I think you’ll find that all “should”-like phrases have an implicit second part—the “With respect to” or the “In order to” part.
If you ask “what should I do tomorrow?”, the implicit second part (the value parameter) could be either “in order to enjoy myself the most” or “in order to make the most people happy” or “in order to make the most money”
You will definitely have to specify the value parameter to a mischievous genie or he will probably just pick one to make you regret your wish.
It seems that you’re asking for a universal should which assumes that there is some universal value or objective morality.
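To make that “implicit second part” concrete, here is a minimal Python sketch. All the names (what_should_i_ask_for, genie_amusement, the sample wishes) are made up for illustration; nothing here comes from the thread itself.

def genie_amusement(wish):
    # Stand-in for "maximally amusing to the genie": it favors wishes it can twist.
    return wish.count("forever")

def what_should_i_ask_for(wishes, value_parameter=None):
    # "Should" with the hidden value parameter made explicit.
    # Leave it out and the mischievous genie happily supplies its own.
    if value_parameter is None:
        value_parameter = genie_amusement
    return max(wishes, key=value_parameter)

wishes = ["health", "to be young forever", "wisdom"]
print(what_should_i_ask_for(wishes))                                           # the genie's own pick
print(what_should_i_ask_for(wishes, value_parameter=lambda w: w == "wisdom"))  # different criterion, different answer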
Well, I’m interested in your answer to the question I put to Kaj: are you making a linguistic claim here, or a meta-ethical claim? I take it that given this:
...that you’re making the meta-ethical claim. So would you say that a question like this “What should my ultimate, terminal goals be?” is nonsense, or what?
Not complete nonsense, it’s just an incomplete specification of the question. Let’s rephrase the question so that we’re asking which goals are right or correct, and think about it computationally.
So the process we’re asking the genie to do is:
1) Generate the list of all possible ultimate, terminal goals.
2) Pick the “right” one.
It could pick one at random, pick the first one it considers, pick the last one it considers, or pick one based on certain criteria. Maybe it picks the one that will make you the happiest, maybe it picks the one that maximizes the number of paperclips in the world, or maybe it tries to maximize for multiple factors that are weighted in some way while abiding by certain invariants. This last one sounds more like what we want.
Basically, it has to have some criteria by which it judges the different options, otherwise its choice is necessarily arbitrary.
So if we look at the process in a bit more detail, it looks like this:
1) Generate the list of all possible ultimate, terminal goals.
2) Run each of them through the rightness function to give them each a score.
3) Pick the one with the highest score.
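Spelled out as a rough Python sketch (the candidate goals, the factors, and the “no killing” invariant below are placeholders made up for illustration, not a claim about what the real criteria are), the three steps might look like this:

def rightness_score(goal):
    # Looks like a one-place function, but the criteria are baked into the body:
    # an invariant the goal must abide by, plus a weighted sum of factors.
    if "killing" in goal:
        return float("-inf")                       # violates the invariant, never picked
    factors = {"happiness": 2.0, "novelty": 1.0}   # weighted factors
    return sum(weight for word, weight in factors.items() if word in goal)

candidate_goals = [                                 # step 1: generate the candidates
    "maximize happiness",
    "maximize novelty through killing",
    "maximize happiness and novelty",
]
best = max(candidate_goals, key=rightness_score)    # steps 2 and 3: score each, pick the top one
print(best)                                         # -> maximize happiness and novelty

Note that the criteria end up hard-coded inside the scoring function here, which is exactly the one-place versus two-place point made a few comments below.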
So that “Rightness” function is the one we’re concerned with, and I think that’s the core of the problem you’re posing.
Either this function is a one-place function, meaning that it takes one parameter:
rightness_score(goals) ⇒ score
Or it’s a two-place function, meaning that it takes two parameters:
rightness_score(goals, criteria) ⇒ score
When I said earlier that all “should”-like statements have an implicit second part, I was claiming that you always have to take into account the criteria by which you’re judging the different possible terminal goals that you can adopt.
Even if you claim that you’re just asking the first one, rightness_score(goals), the body of the function still implicitly calculates the score in some way according to some criteria. It makes more sense to just be honest about the fact that it’s a two-place function.
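As a quick sketch of the difference (with made-up example criteria):

# One-place version: the criteria are still there, just hidden inside the body.
def rightness_score_one_place(goal):
    hidden_criteria = lambda g: g.count("happiness")    # somebody still chose this
    return hidden_criteria(goal)

# Two-place version: the same thing with the value parameter made explicit.
def rightness_score_two_place(goal, criteria):
    return criteria(goal)

# They agree once you admit which criteria you were using all along.
assert rightness_score_one_place("maximize happiness") == \
       rightness_score_two_place("maximize happiness", lambda g: g.count("happiness"))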
It sounds like you’re asking the genie to pick the criteria, but that just recurses the problem. According to which criteria should the genie pick the criteria to pick the goals? That leads to infinite recursion which in this case is a bad thing since it never returns an answer.
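That regress is easy to see if you try to write it down; the toy function below has to call itself to obtain its own argument, so it never returns:

def pick(options, criteria):
    return max(options, key=criteria)

def pick_criteria(possible_criteria):
    # "According to which criteria should the genie pick the criteria?"
    # Answering requires criteria for picking criteria... and so on, forever.
    return pick(possible_criteria, criteria=pick_criteria(possible_criteria))

# Calling pick_criteria([...]) never produces an answer; in Python it just
# recurses until a RecursionError is raised.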
Alternatively, you could claim that there is some objective rightness function in the universe by which to evaluate all possible terminal goals. I don’t believe this and I think that most here on Less Wrong would also disagree with that claim.
That’s why your question doesn’t make sense as asked. To pick a “best” anything, you have to have criteria with which to compare the candidates. We could ask the genie to determine this morality function by examining the brains of all humans on Earth, which is the meta-ethical problem that Eliezer is addressing with his Coherent Extrapolated Volition (CEV). But the genie won’t just somehow have that knowledge and decide to use that method when you ask him what you “should” do.
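Just to make that contrast concrete, a toy stand-in for “derive the criteria from human brains” might look like the sketch below. To be clear, this is not CEV itself (which involves extrapolating what people would want if they knew more and thought faster); it is only an illustration that the criteria have to come from somewhere, and the genie won’t run anything like it unless asked:

def aggregate_human_criteria(humans, goal):
    # Toy aggregation: average how well each person's (opaque) value function
    # rates the goal. Real proposals like CEV are far more involved than this.
    return sum(person(goal) for person in humans) / len(humans)

# Each "brain" is modeled here as nothing more than a scoring function.
humans = [
    lambda g: 1.0 if "no killing" in g else 0.0,
    lambda g: 0.5 if "happiness" in g else 0.0,
]
print(aggregate_human_criteria(humans, "maximize happiness, no killing"))   # 0.75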
Hmm, you present a convincing case, but the result seems to me to be a paradox.
On the one hand, we can’t ask about ultimate values or ultimate criteria or whatever in an unconditioned ‘one place’ way; we always need to assume some set of criteria or values in order to productively frame the question.
On the other hand, if we end up saying that human beings can’t ever sensibly ask questions about ultimate criteria or values, then we’ve gone off the rails.
I don’t quite know what to say about that.
I’m not saying you can’t ever ask questions about ultimate values, just that there isn’t some objective moral code wired into the fabric of the universe that would apply to all possible mind-designs. Any moral code we come up with, we have to do so with our own brains, and that’s okay. We’re also going to judge it with our own brains, since that’s where our moral intuitions live.
“The human value function,” if there is such a thing, is very complex, weighing tons of different parameters. Some things seem to vary a bit between individuals, but some are near-universal within the human species, such as the proposition that killing is bad and we should avoid it when possible.
When wishing on a genie, you probably don’t want to just ask for everyone to feel pleasure all the time because, even though people like feeling pleasure, they probably don’t want to be in an eternal state of mindless bliss with no challenge or more complex value. That’s because the “human value function” is very complex. We also don’t know it. It’s essentially a black box where we can compute a value on outcomes and compare them, but we don’t really know all the factors involved. We can infer things about it from patterns in the outcomes though, which is how we can come up with generalities such as “killing is bad”.
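The “black box” picture can also be put in code terms. The function below is obviously fake, with made-up factors and weights; the whole point of the comment is that we don’t have the real one, only its outputs to compare:

def human_value(outcome):
    # Pretend black box standing in for the real (unknown) human value function.
    # We can call it and compare outputs, but not read the true internals.
    score = 0.0
    if outcome.get("killing"):
        score -= 1000.0                              # a pattern we can infer: killing is bad
    score += outcome.get("pleasure", 0.0)
    score += 2.0 * outcome.get("challenge", 0.0)     # pleasure isn't the only factor
    return score

eternal_bliss = {"pleasure": 100.0, "challenge": 0.0}
ordinary_life = {"pleasure": 40.0, "challenge": 40.0}
print(human_value(ordinary_life) > human_value(eternal_bliss))   # True: maximum pleasure doesn't win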
So after all this discussion, what question would you actually want to ask the genie? You probably don’t want to change your values drastically, so maybe you just want to find out what they are?
It’s an interesting course of thought. Thanks for starting the discussion.
Wait, why not? How would asking about ultimate values or criteria work, if we need to assume some value or criterion in order to productively deliberate?
Not sure what you’re asking. I guess it could be either an empirical or a logical claim, depending on which way you want to put it.
Sure it’s grammatical and sure you can, but if you don’t specify what the “should” means, you might not like the answer. See my above comment.
Let me rephrase my question: Suppose I presented a series of well-crafted studies which show that people often use the word ‘should’ without intending to make reference to an assumed set of terminal values. I mean that people often use the word ‘should’ to ask questions like ‘What should my ultimate, terminal values be?’
Would your reaction to these studies be:
1) I guess I was wrong when I said that ‘“Should” implies a goal according to some set of values’. Apparently people use the word ‘should’ to talk about the values themselves, without necessarily implying a higher-up set of values.
or
2) Many people appear to be confused about what ‘should’ means. Though it appears to be a well-formed English sentence, the question ‘what should my ultimate, terminal values be?’ is in fact nonsense.
In other words, when you say that ‘should’ implies a goal according to some set of values, are you making a claim about language, such as might be found in a dictionary, or are you making a claim about meta-ethical facts, such as might be found in a philosophy paper?
Or do you mean something else entirely?
I endorse the answers that TylerJay gave to this question; he’s saying basically the same thing as I was trying to get at.
Do you have a response to the question I put to him? If it’s true that asking after values or goals or criteria always involves presupposing some higher up goals, values or criteria, then does it follow from this that we can’t ask after terminal goals, values, or ultimate criteria? If not, why not?
Yes and no. You could just decide on some definition of terminal goals or ultimate criteria, and ask the genie what we should do to achieve the goals as you define them. But it’s up to you (or someone else who you trust) to come up with that definition first, and the only “objective” criterion for what that definition should be like is something along the lines of “am I happy with this definition and its likely consequences”.
And you would say that the above doesn’t involve any antecedent criteria upon which this judgement is based, say for determining the value of this or that consequence?
Can you explain why? Why couldn’t that be exactly what I’m asking for?