Taboo the word “should” and try to ask that question again. I think you’ll find that all “should”-like phrases have an implicit second part—the “With respect to” or the “In order to” part.
If you ask “what should I do tomorrow?”, the implicit second part (the value parameter) could be “in order to enjoy myself the most”, “in order to make the most people happy”, or “in order to make the most money”.
You will definitely have to specify the value parameter to a mischievous genie or he will probably just pick one to make you regret your wish.
It seems that you’re asking for a universal should which assumes that there is some universal value or objective morality.
I think you’ll find that all “should”-like phrases have an implicit second part—the “With respect to” or the “In order to” part.
Well, I’m interested in your answer to the question I put to Kaj: are you making a linguistic claim here, or a meta-ethical claim? I take it that given this:
It seems that you’re asking for a universal should which assumes that there is some universal value or objective morality.
...that you’re making the meta-ethical claim. So would you say that a question like this “What should my ultimate, terminal goals be?” is nonsense, or what?
So would you say that a question like this “What should my ultimate, terminal goals be?” is nonsense, or what?
Not complete nonsense; it’s just an incomplete specification of the question. Let’s rephrase the question so that we’re asking which goals are right or correct and think about it computationally.
So the process we’re asking the genie to carry out is:
1. Generate the list of all possible ultimate, terminal goals.
2. Pick the “right” one.
It could pick one at random, pick the first one it considers, pick the last one it considers, or pick one based on certain criteria. Maybe it picks the one that will make you the happiest, maybe it picks the one that maximizes the number of paperclips in the world, or maybe it tries to maximize for multiple factors that are weighted in some way while abiding by certain invariants. This last one sounds more like what we want.
Basically, it has to have some criteria by which it judges the different options; otherwise, its choice is necessarily arbitrary.
So if we look at the process in a bit more detail, it looks like this (sketched in code just below):
1. Generate the list of all possible ultimate, terminal goals.
2. Run each of them through the rightness function to give them each a score.
3. Pick the one with the highest score.
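To make that concrete, here is a minimal Python sketch of the loop, folding in the earlier point about weighted factors and invariants. Every specific in it (the candidate goals, the invariant, the factors and weights) is an invented placeholder, not a claim about what the real criteria are:

```python
# Toy sketch of the genie's selection process: generate candidates,
# score each with a rightness function, pick the highest scorer.
# All goals, factors, weights, and invariants below are invented placeholders.

CANDIDATE_GOALS = [
    "maximize my happiness",
    "maximize the number of paperclips",
    "maximize my income",
]

def satisfies_invariants(goal):
    # Stand-in for constraints the choice must never violate.
    return "paperclips" not in goal

def rightness_score(goal, criteria):
    # criteria: list of (factor_function, weight) pairs.
    return sum(weight * factor(goal) for factor, weight in criteria)

def pick_goal(goals, criteria):
    viable = [g for g in goals if satisfies_invariants(g)]
    return max(viable, key=lambda g: rightness_score(g, criteria))

# Hypothetical criteria: weight happiness heavily, money a little.
criteria = [
    (lambda g: 1.0 if "happiness" in g else 0.0, 0.7),
    (lambda g: 1.0 if "income" in g else 0.0, 0.3),
]

print(pick_goal(CANDIDATE_GOALS, criteria))  # -> "maximize my happiness"
```

Notice that the surrounding loop is trivial; everything interesting lives inside rightness_score and the criteria handed to it.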
So that “rightness” function is the one we’re concerned with, and I think that’s the core of the problem you’re posing.
Either this function is a one-place function, meaning that it takes one parameter:

rightness_score(goals) ⇒ score

Or it’s a two-place function, meaning that it takes two parameters:

rightness_score(goals, criteria) ⇒ score
When I said earlier that all “should”-like statements have an implicit second part, I was claiming that you always have to take into account the criteria by which you’re judging the different possible terminal goals that you can adopt.
Even if you claim that you’re just asking the first one, rightness_score(goals), the body of the function still implicitly calculates the score in some way according to some criteria. It makes more sense to just be honest about the fact that it’s a two-place function.
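Here is the same point as a hedged code sketch (the criteria are placeholders): the “one-place” version is just the two-place version with the criteria frozen into its body.

```python
# Same scoring logic written two ways; the criteria are illustrative only.

def rightness_score_two_place(goal, criteria):
    # The criteria are an explicit parameter: (factor_function, weight) pairs.
    return sum(weight * factor(goal) for factor, weight in criteria)

# Hypothetical criteria, hard-coded.
MY_CRITERIA = [(lambda g: 1.0 if "happiness" in g else 0.0, 1.0)]

def rightness_score_one_place(goal):
    # Looks one-place from the outside, but the criteria are still here,
    # just baked into the body instead of passed in as an argument.
    return rightness_score_two_place(goal, MY_CRITERIA)
```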
It sounds like you’re asking the genie to pick the criteria, but that just recurses the problem: according to which criteria should the genie pick the criteria to pick the goals? That leads to infinite recursion, which in this case is a bad thing, since it never returns an answer.
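Sketched in code (names invented), the regress is a recursion with no base case; it can only terminate if some criteria get supplied from outside rather than chosen by yet more criteria:

```python
# Sketch of the regress: picking criteria needs meta-criteria, which need
# meta-meta-criteria, and so on. With no base case this never produces an
# answer (in Python it raises RecursionError rather than looping forever).

def pick_criteria(candidate_criteria):
    meta_criteria = pick_criteria(candidate_criteria)  # no base case
    return max(candidate_criteria, key=meta_criteria)

def pick_goal(goals, candidate_criteria):
    criteria = pick_criteria(candidate_criteria)  # never returns
    return max(goals, key=criteria)
```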
Alternatively, you could claim that there is some objective rightness function in the universe by which to evaluate all possible terminal goals. I don’t believe this and I think that most here on Less Wrong would also disagree with that claim.
That’s why your question doesn’t make sense. To pick a “best” anything, you have to have criteria with which to compare the candidates. We could ask the genie to determine this morality function by examining the brains of all humans on earth, which is the meta-ethical problem that Eliezer is addressing with his Coherent Extrapolated Volition (CEV), but the genie won’t just somehow have that knowledge and decide to use that method when you ask him what you “should” do.
Hmm, you present a convincing case, but the result seems to me to be a paradox.
On the one hand, we can’t ask about ultimate values or ultimate criteria or whatever in an unconditioned ‘one place’ way; we always need to assume some set of criteria or values in order to productively frame the question.
On the other hand, if we end up saying that human beings can’t ever sensibly ask questions about ultimate criteria or values, then we’ve gone off the rails.

I don’t quite know what to say about that.
On the other hand, if we end up saying that human beings can’t ever sensibly ask questions about ultimate criteria or values, then we’ve gone off the rails.
I’m not saying you can’t ever ask questions about ultimate values, just that there isn’t some objective moral code wired into the fabric of the universe that would apply to all possible mind-designs. Any moral code we come up with, we have to do so with our own brains, and that’s okay. We’re also going to judge it with our own brains, since that’s where our moral intuitions live.
“The human value function”, if there is such a thing, is very complex, weighing tons of different parameters. Some things seem to vary a bit between individuals, but some are near-universal within the human species, such as the proposition that killing is bad and we should avoid it when possible.
When wishing on a genie, you probably don’t want to just ask for everyone to feel pleasure all the time because, even though people like feeling pleasure, they probably don’t want to be in an eternal state of mindless bliss with no challenge or more complex value. That’s because the “human value function” is very complex. We also don’t know it. It’s essentially a black box: we can compute a value on outcomes and compare them, but we don’t really know all the factors involved. We can infer things about it from patterns in the outcomes, though, which is how we can come up with generalities such as “killing is bad”.
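As a toy illustration of that black-box picture (the value function below is a made-up stand-in; the real one is implemented by human brains and we can only query it), the only operation we really have is comparing outcomes, and generalities fall out of the pattern of comparisons:

```python
# Toy stand-in for the "human value function". Pretend we cannot read its
# internals; all we can do is evaluate outcomes and compare the results.

def human_value(outcome):
    # Invented internals; in reality this computation happens in human brains.
    score = 0.0
    score -= 100.0 * outcome.get("deaths", 0)
    score += outcome.get("flourishing", 0.0)
    return score

def prefers(outcome_a, outcome_b):
    # The operation we actually have access to: comparing two outcomes.
    return human_value(outcome_a) > human_value(outcome_b)

# Probing the black box suggests a generality like "killing is bad":
print(prefers({"deaths": 0, "flourishing": 1.0},
              {"deaths": 1, "flourishing": 5.0}))  # True
```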
So after all this discussion, what question would you actually want to ask the genie? You probably don’t want to change your values drastically, so maybe you just want to find out what they are?
It’s an interesting course of thought. Thanks for starting the discussion.
I’m not saying you can’t ever ask questions about ultimate values
Wait, why not? How would asking about ultimate values or criteria work, if we need to assume some value or criterion in order to productively deliberate?