This never occurred to me until today, but can you solve the ‘three wishes from a mischievous but rule abiding genie’ problem just by spending your first wish on asking for a perspicuous explanation of what you should wish for? What could go wrong?
Asking what you “should wish for” still requires you to specify what you’re trying to maximize. Specifying your goal in detail has all the same risks as specifying your wish in detail, so you have the same exposure to risk.
Edit: See my longer explanation below
You could possibly say “I wish for you to tell me what you would wish for if you had your current intelligence and knowledge, but the same values and desires as me.” That would still require just the right combination of intelligence, omniscience, and literal truthfulness on the genie’s part though.
The genie replies, “What I would wish for in those circumstances would only be of value to an entity of my own intelligence and knowledge. You couldn’t possibly use it. And besides, I’m sorry, but there’s no such thing as ‘your values and desires’. You’re barely capable of carrying out a decision made yesterday to go to the gym today. You might as well ask me to make colourless green ideas sleep furiously. On a more constructive note, I suggest you start small, and wish for fortunate chances on a scale that could actually happen without me, ones that will take you a little way in whatever direction you want to go. I’ll count something that size as a milli-wish. Then tomorrow you can make another, and so on. I have to warn you, though, that this still ends badly for some people. Giving you this advice counts as your first milli-wish. See you tomorrow.”
I’m nowhere near that confident in my values and desires.
“You should wish for me to tell you what you would wish for in your place if I had your current intelligence and knowledge, but the same values and desires as you.”
“I have replaced your values and desires with my own. You should wish to become a genie.”
“Here is your list of all possible wishes.”
“You should wish that genies never existed.”
Can you explain why? Why couldn’t that be exactly what I’m asking for?
“Should” implies a goal according to some set of values. Since the genie is mischievous, it might e.g. tell you what you should ask for so as to make the consequences maximally amusing to the genie.
Do you mean this as an empirical claim about the way we use the word? I think it’s at least grammatical to say ‘What should my ultimate, terminal goals be?’ Why can’t I ask the genie that?
Taboo the word “should” and try to ask that question again. I think you’ll find that all “should”-like phrases have an implicit second part—the “With respect to” or the “In order to” part.
If you ask “what should I do tomorrow?”, the implicit second part (the value parameter) could be either “in order to enjoy myself the most” or “in order to make the most people happy” or “in order to make the most money”
You will definitely have to specify the value parameter to a mischievous genie or he will probably just pick one to make you regret your wish.
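To make the “implicit second part” concrete, here is a minimal Python sketch; the function name should_do and the toy scores are my own illustrative assumptions, not anything from the thread:

```python
# Tabooing "should": a sketch in which the value parameter is explicit.
# should_do and the toy scores below are illustrative assumptions only.

def should_do(options, in_order_to):
    """'What should I do tomorrow?' only has an answer relative to a goal."""
    return max(options, key=in_order_to)

options = ["work overtime", "go to the beach", "volunteer"]
enjoyment = {"work overtime": 1, "go to the beach": 3, "volunteer": 2}
others_happiness = {"work overtime": 2, "go to the beach": 1, "volunteer": 3}

print(should_do(options, in_order_to=enjoyment.get))        # -> "go to the beach"
print(should_do(options, in_order_to=others_happiness.get)) # -> "volunteer"
```

Leaving the in_order_to parameter unspecified is exactly the opening a mischievous genie needs.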
It seems that you’re asking for a universal should which assumes that there is some universal value or objective morality.
Well, I’m interested in your answer to the question I put to Kaj: are you making a linguistic claim here, or a meta-ethical claim? Given the above, I take it that you’re making the meta-ethical claim. So would you say that a question like “What should my ultimate, terminal goals be?” is nonsense, or what?
Not complete nonsense, it’s just an incomplete specification of the question. Let’s rephrase the question so that we’re asking which goals are right or correct and think about it computationally.
So the process we’re asking the genie to do is:
Generate the list of all possible ultimate, terminal goals.
Pick the “right” one
It could pick one at random, pick the first one it considers, pick the last one it considers, or pick one based on certain criteria. Maybe it picks the one that will make you the happiest, maybe it picks the one that maximizes the number of paperclips in the world, or maybe it tries to maximize for multiple factors that are weighted in some way while abiding by certain invariants. This last one sounds more like what we want.
Basically, it has to have some criteria by which it judges the different options, otherwise its choice is necessarily arbitrary.
So if we look at the process in a bit more detail, it looks like this:
Generate the list of all possible ultimate, terminal goals
Run each of them through the rightness function to give them each a score
Pick the one with the highest score
So that “Rightness” function is the one that we’re concerned with and I think that’s the core of the problem you’re proposing.
Either this function is a one-place function, meaning that it takes one parameter:
rightness_score(goals) ⇒ score
Or it’s a two-place function, meaning that it takes two parameters:
rightness_score(goals, criteria) ⇒ score
When I said earlier that all “should”-like statements have an implicit second part, I was claiming that you always have to take into account the criteria by which you’re judging the different possible terminal goals that you can adopt.
Even if you claim that you’re just asking the first one, rightness_score(goals), the body of the function still implicitly calculates the score in some way according to some criteria. It makes more sense to just be honest about the fact that it’s a two-place function.
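A minimal Python sketch of the process just described; the goal and criteria representations here are my own illustrative assumptions, not a real model of value:

```python
# Sketch of the two-place rightness function and the selection process above.
# Goals are plain strings and criteria are (feature, weight) pairs purely for
# illustration.

def rightness_score(goal, criteria):
    """Two-place version: the judging criteria are an explicit parameter."""
    return sum(weight * feature(goal) for feature, weight in criteria)

def pick_terminal_goal(possible_goals, criteria):
    # 1. take the candidate goals, 2. score each one, 3. pick the highest score
    return max(possible_goals, key=lambda goal: rightness_score(goal, criteria))

# A "one-place" rightness function just hides the criteria inside its body:
def rightness_score_one_place(goal):
    hidden_criteria = [(lambda g: "happiness" in g, 1.0)]  # chosen by *someone*
    return rightness_score(goal, hidden_criteria)

goals = ["maximize happiness", "maximize paperclips"]
criteria = [(lambda g: "happiness" in g, 2.0), (lambda g: "paperclips" in g, -1.0)]
print(pick_terminal_goal(goals, criteria))  # -> "maximize happiness"
```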
It sounds like you’re asking the genie to pick the criteria, but that just recurses the problem. According to which criteria should the genie pick the criteria to pick the goals? That leads to infinite recursion which in this case is a bad thing since it never returns an answer.
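The regress can be written out directly; in this sketch (again, names are just for illustration), asking for the criteria to pick the criteria never returns:

```python
# Why "let the genie pick the criteria" never returns an answer: picking
# criteria needs meta-criteria, picking those needs meta-meta-criteria, ...

def pick_criteria(candidate_criteria):
    meta_criteria = pick_criteria(candidate_criteria)  # the infinite regress
    return max(candidate_criteria, key=meta_criteria)

# Calling pick_criteria(...) on any list would recurse until Python raises
# RecursionError, i.e. the question as posed never produces an answer.
```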
Alternatively, you could claim that there is some objective rightness function in the universe by which to evaluate all possible terminal goals. I don’t believe this and I think that most here on Less Wrong would also disagree with that claim.
That’s why your question doesn’t make sense. To pick a “best” anything, you have to have criteria with which to compare them. We could ask the genie to determine this morality function by examining the brains of all humans on earth, which is the meta-ethical problem that Eliezer is addressing with his Coherent Extrapolated Volition (CEV), but the genie won’t just somehow have that knowledge and decide to use that method when you ask him what you “should” do.
Hmm, you present a convincing case, but the result seems to me to be a paradox.
On the one hand, we can’t ask about ultimate values or ultimate criteria or whatever in an unconditioned ‘one place’ way; we always need to assume some set of criteria or values in order to productively frame the question.
On the other hand, if we end up saying that human beings can’t ever sensibly ask questions about ultimate criteria or values, then we’ve gone off the rails.
I don’t quite know what to say about that.
I’m not saying you can’t ever ask questions about ultimate values, just that there isn’t some objective moral code wired into the fabric of the universe that would apply to all possible mind-designs. Any moral code we come up with, we have to do so with our own brains, and that’s okay. We’re also going to judge it with our own brains, since that’s where our moral intuitions live.
“The human value function”, if there is such a thing, is very complex, weighing tons of different parameters. Some things seem to vary a bit between individuals, but some are near universal within the human species, such as the proposition that killing is bad and we should avoid it when possible.
When wishing on a genie, you probably don’t want to just ask for everyone to feel pleasure all the time because, even though people like feeling pleasure, they probably don’t want to be in an eternal state of mindless bliss with no challenge or more complex value. That’s because the “human value function” is very complex. We also don’t know it. It’s essentially a black box where we can compute a value on outcomes and compare them, but we don’t really know all the factors involved. We can infer things about it from patterns in the outcomes though, which is how we can come up with generalities such as “killing is bad”.
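One way to picture the “black box” point: we can only query the value function on outcomes and compare the results, not read off its parameters. A toy sketch, where the human_value stub is a stand-in of my own and not a real model of human values:

```python
# The "human value function" as a black box: we can compare outcomes it
# scores, but we can't inspect its internals. human_value is a toy stub.

def human_value(outcome):
    return -10 * outcome.count("killing") + outcome.count("challenge")

def prefer(outcome_a, outcome_b):
    """All we really observe: which of two outcomes is valued more."""
    return outcome_a if human_value(outcome_a) > human_value(outcome_b) else outcome_b

# From many such comparisons we can infer rough generalities like
# "killing is bad" without ever seeing the function's actual parameters.
print(prefer("a world with killing", "a world with challenge"))  # -> the latter
```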
So after all this discussion, what question would you actually want to ask the genie? You probably don’t want to change your values drastically, so maybe you just want to find out what they are?
It’s an interesting course of thought. Thanks for starting the discussion.
I’m not saying you can’t ever ask questions about ultimate values
Wait, why not? How would asking about ultimate values or criteria work, if we need to assume some value or criterion in order to productively deliberate?
Do you mean this as an empirical claim about the way we use the word?
Not sure what you’re asking. I guess it could be either an empirical or a logical claim, depending on which way you want to put it.
Sure it’s grammatical and sure you can, but if you don’t specify what the “should” means, you might not like the answer. See my above comment.
Let me rephrase my question: Suppose I presented a series of well-crafted studies which show that people often use the word ‘should’ without intending to make reference to an assumed set of terminal values. I mean that people often use the word ‘should’ to ask questions like ‘What should my ultimate, terminal values be?’
Would your reaction to these studies be:
1) I guess I was wrong when I said that ‘”Should” implies a goal according to some set of values’. Apparently people use the word ‘should’ to talk about the values themselves and without necessarily implying a higher up set of values.
or
2) Many people appear to be confused about what ‘should’ means. Though it appears to be a well formed English sentence, the question ‘what should my ultimate, terminal values be?’ is in fact nonsense.
In other words, when you say that ‘should’ implies a goal according to some set of values, are you making a claim about language, such as might be found in a dictionary, or are you making a claim about meta-ethical facts, such as might be found in a philosophy paper?
Or do you mean something else entirely?
I endorse the answers that TylerJay gave to this question, he’s saying basically the same thing as I was trying to get at.
Do you have a response to the question I put to him? If it’s true that asking after values or goals or criteria always involves presupposing some higher up goals, values or criteria, then does it follow from this that we can’t ask after terminal goals, values, or ultimate criteria? If not, why not?
Yes and no. You could just decide on some definition of terminal goals or ultimate criteria, and ask the genie what we should do to achieve the goals as you define them. But it’s up to you (or someone else who you trust) to come up with that definition first, and the only “objective” criteria for what that definition should be like is something along the lines of “am I happy with this definition and its likely consequences”.
And you would say that the above doesn’t involve any antecedent criteria upon which this judgement is based, say for determining the value of this or that consequence?
How is it different from what Eliezer calls “I wish for you to do what I should wish for”?
Maybe it’s only trivially different. But I’m imagining a genie that is sapient (so it’s not like the time machine...though I don’t know if the time machine pump thing is a coherent idea) and it’s not safe. Suppose, say, that it’s programmed to fulfill any wish asked of it so as to produce two reactions: first, to satisfy the wisher that the wish was fulfilled as stated, and second, to make the wisher regret having wished for that. That seems to me to capture the ‘mischievous genie’ of lore, and it’s an idea EY doesn’t talk about in that article, except maybe to deny its possibility.
Anyway, with such a genie, wishing for it to do whatever you ought to wish for is probably the same as asking it what to wish for. I’d take the second option, because I’m not the world’s best person, and I’d want to think over hitting the ‘go’ button.
I suspect that to be able to evoke this reaction reliably, the 100%-jackass genie would have to explicitly exclude the “do what I ought to have wished for” option, and so is at least as smart as a safe genie.
Anyway, with such a genie, wishing for it to do whatever you ought to wish for is probably the same as asking it what to wish for. I’d take the second option, because I’m not the world’s best person, and I’d want to think over hitting the ‘go’ button.
I… do not follow at all, even after reading this paragraph a few times.
I agree that it’s at least as smart as the safe genie, and I suppose it’s likely to be even more complicated. The jackass genie needs to be able both to figure out what you really want, and to figure out how to betray that desire within the confines of your stated wish. I realize I do this with my son sometimes when he makes up crazy rules for games: I try to come up with ways to exploit the rule, so as to show why it’s not a good one. I guess that kind of makes me a jackass.
Anyway, I take it you agree that my jackass genie is one of the possibilities? Being smart doesn’t make it safe. And, as is the law of geniedom, it’s not allowed to refuse any of my wishes.
Sorry to be unclear. You asked me how my suggestion was different from just telling the genie ‘just do whatever’s best’. I said that my suggestion is not very different. Only, maybe ‘do whatever’s best’ isn’t in my selfish interest. Maybe, for example, I ought to stop smoking crack or something. But even if it is best for me to stop smoking crack, I might just really like crack. So I want to know what’s in fact best for me before deciding to get it.
I think the problem is, ‘mischievous but rule abiding’ doesn’t sufficiently limit the genie’s activities to sane ones.
For instance, the genie pulls out a pen made entirely out of antimatter to begin writing down a perspicuous explanation, and the antimatter pen promptly reacts with the matter in the air, killing you and anyone in the area.
When the next person comes into the wasteland after it has stopped exploding and says “That’s not mischievous, that’s clearly malicious!”, the genie points out that he can just bring them all back if someone wishes for it, so clearly it is only a bit of mischief, much like how many would consider taking a cookie from a four-year-old and holding it above their head mischievous: any suffering is clearly reversible. Oh, also, the last person asked for a perspicuous explanation of what they should wish for, and it is written in antimatter pen on antimatter paper inside that opaque, massive, magnetically sealed box, which is just about to run out of power. Then THAT person also blows up when the box’s power containment fails.
That’s kind of a cool story, but that genie is, I think, simply malevolent. I have in mind the genie of lore, which I think is captured by these rules: first, to satisfy the wisher that the wish was fulfilled as stated; second, to make the wisher regret having wished for that; and third, the genie isn’t allowed to do anything else. I don’t think your scenario satisfies these rules.
Well, that’s true, based on those rules. The first person dies before the wish is completed, so clearly he wasn’t satisfied. Let me pick a comparably hazardous interpretation that does seem to follow those rules.
The Genie writes down the perspicuous instructions in highly radioactive, radioluminescent paint, comparable to the paint that poisoned people in the 1900s but worse, in a massive, bold font. The instructions are ‘Leave the area immediately and wish to be cured of radiation poisoning.’
When the wisher realizes that they have in fact received an almost immediately fatal dose of radiation, they leave the area, follow the instructions, and seem to be cured and not die. When they call out the Genie for putting them in a deadly situation and forcing them to burn a wish to get out of it, the genie refers them to Jafar doing something similar to Abis Mal in Aladdin 2. The Genie DID give them perfectly valid instructions for a concise wish. Had the Genie made the instructions longer, they would have died of radiation poisoning before reading them and making the wish, and instructions which take longer than your lifespan to use hardly seem to the Genie to be perspicuous.
Is this more in line with what you were thinking of?
That’s certainly a lot closer. I guess my question is: does this satisfy rule number three? One might worry that exposing the wisher to a high dose of radiation is totally inessential to the presentation of an explanation of what to wish for. Are you satisfied that your story differs from this one?
Me: O Genie, my first wish is for you to tell me clearly what I should ask for!
[The Genie draws a firearm and shoots me in the stomach]
Genie: First, wish for immediate medical attention for a gunshot wound.
This story, it seems to me, would violate rule three.
I think I need to clarify how it works when things that are totally inessential are being disallowed, then.
Consider your wish for information again: What if the Genie says:
Genie A: “Well, I can’t write down the information, because writing it is totally inessential to giving you the information, and my wishing powers do not allow me to do things that are totally inessential to giving you the information… not since I hurt that fellow by writing something in radioactive luminescent paint.”
Genie A: “And I can’t speak the information, because speaking it is totally inessential to giving you the information, and my wishing powers do not allow me to do things that are totally inessential to giving you the information… not since I hurt that other fellow by answering at 170 decibels.”
Genie A: “And I can’t simply alter your mind so that the information is present, because directly altering your brain is totally inessential… you see where I’m going with this. So what you should wish for with your second wish is that I can do things that are totally inessential to the wish… so that I can actually grant your wishes.”
All of that SOUNDS silly. But it also seems at least partially true from the genie’s perspective: writing isn’t essential, because he can speak; speaking isn’t essential, because he can write; brain alteration isn’t essential; and so on. But having some way of conveying the information to you IS essential. So presumably, the genie gets to choose at least one method from a list of choices… except choosing among a set of methods is what allowed him to hurt people in the first place (by choosing a method that was set for arbitrarily maximized mischief).
The alternative is that the Genie doesn’t get to select methods until you tell him (hence making those methods essential to the wish and resolving the problem). However, that could lead to an entirely different approach to mischief:
Genie B: “Okay: First you’ll have to tell me whether you want me to write it down, speak it out loud, or something else.”
Me: “Write it down”
Genie B: “Okay: Next, you’ll have to tell me whether you want me to write it with a pen, a pencil, or something else.”
Me: “A Pen.”
Genie B: “Okay: Next, you’ll have to tell me whether you want to write it down with a black pen, a blue pen, or something else.”
Me: “Black.”
Genie B: “Okay. Now you’ll have to tell me whether you want to write it on lined paper, copy paper, or something else.”
Me: “Are you going to actually get to writing down the perspicuous wish? How many of these questions do I have left?”
Genie B: “999,996, approximately.”
Me: “Seriously?”
Neither Genie A nor Genie B is actually helping you in the way you had in mind, but their approaches to not helping you are quite different. Which (if either) fits better with your vision of a mischievous genie?