Any puzzlement we feel when reading such thought experiments would, I suspect, evaporate if we paid more attention to pragmatics.
The set-up of the scenario (“Suppose that Omega, etc.”) presupposes some things. The question “What do you do?” presupposes other things. Not too surprisingly, these two sets of presuppositions are in conflict.
Specifically, the question “What do you do” presupposes, as parts of its conditions of felicity, that it follows a set-up in which all of the relevant facts have been presented. There is no room left to spring further facts on you later, and we would regard that as cheating. (“You will in fact give $5 to Omega because he has slipped a drug into your drink which causes you to do whatever he suggests you will do!”)
The presuppositions of “What do you do” lead us to assume that we are going about our normal lives, when suddenly some guy appears before us, introduces himself as Omega, says “You will now give me $5”, and looks at us expectantly. Whereupon we nod politely (or maybe say something less polite), and go on our way. From which all we can deduce is that this wasn’t in fact the Omega about which the Tales of Newcomb were written, since he’s just been shown up as an imperfect predictor.
The presuppositions carried by “Omega is a perfect predictor” are of an entirely different order. Logically, whatever predictions Omega makes will in fact turn out to have been correct. But these presuppositions simply don’t match up with those of the “What do you do?” question, in which what determines your behaviour is only the ordinary facts of the world as you know it, plus whatever facts are contained in the scenario that constitutes the set-up of the question.
If Omega is a perfect predictor, all we have is a possible world history, where Omega at some time t appears, makes a prediction, and at some time t’ that prediction has been fulfilled. There is no call to ask a “What do you do” question. The answers are laid out in the specification of the world history.
One-boxing is the correct choice in the original problem, because we are asked to say in which of two world-histories we walk away with $1M, and given the stipulation that there exist no world-histories to choose from in which we walk away with $1M and two boxes. We’re just led astray by the pragmatics of “What do you do?”.
[EDIT: in case it isn’t clear, and because you said you were curious what people thought the obvious answer was, I think the obvious answer is “get lost”; similarly the obvious answer to the original problem is “I take the two boxes”. The obvious answer just happens to be the incorrect choice. I have changed the paragraph just previous to say “the correct choice” instead of “the correct answer”.
Also, in the previous paragraph I assume I want the $1M, and it is that which makes one-boxing the correct choice. Of course it’s presented as a free-will question, that is, one in which more than one possible world-history is available, and so I can’t rule out unlikely worlds in which I want the $1M but mistakenly pick the wrong world-history.]
Recording an oops: when I wrote the above I didn’t really understand Newcomb’s Problem. I retract pretty much all of the above comment.
I’m now partway through Gary Drescher’s Good and Real and glad that it’s given me a better handle on Newcomb, and that I can now classify my mistake (in my above description of the “original problem”) as “evidentialist”.
I think I understand your point. A reiteration in my words:
The question “What do you do?” implies that the answer is not locked in. If a perfect predictor has made a prediction about what I will do, then the question “What do you do?” is nonsensical.
Am I close?
EDIT: No, this was not a correct interpretation of Morendil’s post. See below.
EDIT2: And it has nothing to do with what I think is true.
If you don’t know what the prediction is, it’s not nonsensical. You still have to decide what to do.
If you do know what the prediction is, then the way in which you react to that prediction determines which prediction you’ll hear. For example, if I walk up to someone and say, “I’m good at predicting people in simple problems, I’m truthful, and I predict you’ll give me $5,” they won’t give me anything. Since I know this, I won’t make that prediction. If people did decide to give me $5 in this sort of situation, I might well go around making such predictions.
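To illustrate the point in that comment, here is a minimal sketch (my own toy model, with hypothetical function names) of a predictor that only announces a prediction when the listener's reaction to hearing it would make the prediction come true:

    # Toy model of the listener: being told "you'll give me $5" does not,
    # by itself, make a typical person hand the money over.
    def reaction_to(prediction):
        return "keeps the $5"

    # The predictor only speaks when announcing the prediction would be
    # self-fulfilling; otherwise it stays silent.
    def announce_if_self_fulfilling(prediction):
        return prediction if reaction_to(prediction) == prediction else None

    print(announce_if_self_fulfilling("gives me the $5"))  # None: the prediction is never made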
You seem to be confused about free will. Keep reading the Sequences and you won’t be.
I don’t know how to respond to this or Morendil’s second comment. I feel like I am missing something obvious to everyone else, but when I read explanations I feel like they are talking about a completely unrelated topic.
Things like this:
You seem to be confused about free will. Keep reading the Sequences and you won’t be.
Confuse me because as far as I can tell, this has nothing to do with free will. I don’t care about free will. I care about what happens when a perfect predictor enters the room.
Is such a thing just completely impossible? I wouldn’t have expected the answer to this to be Yes.
Okay, yeah, so restrict yourself only to the situations where people will give you the $5 even though you told them the prediction. This is a good example of my frustration. I feel like your response is completely irrelevant. Experience tells me this is highly unlikely. So what am I missing? Some key component to free will? A bad definition of “perfect predictor”? What?
To me the scenario seems to be as simple as: If Omega predicts X, X will happen. If X wouldn’t have happened, Omega wouldn’t predict X.
I don’t see how including “knowledge of the prediction” into X makes any difference. I don’t see how whatever definition of free will you are using makes any difference.
“Go read the Sequences” is fair enough, but I wouldn’t mind a hint as to what I am supposed to be looking for. “Free will” doesn’t satiate my curiosity. Can you at least tell me why Free Will matters here? Is it something as simple as, “You cannot predict past a free will choice?”
As it is right now, I haven’t learned anything other than, “You’re wrong.”
I sympathize with your frustration at those who point you to references without adequate functional summaries. Unfortunately, I struggle with some of the same problems you’re asking about.
Still, I can point you to the causal map that Eliezer_Yudkowsky believes captures this problem accurately (ETA: That means Newcomb’s problem, though this discussion started off on a different one).
The final diagram in this post shows how he views it. He justifies this causal model by the constraints of the problem, which he states here:
it is pretty clear that the Newcomb’s Problem setup, if it is to be analyzed in causal terms at all, must have nodes corresponding to logical uncertainty, on pain of violating the axioms governing causal graphs. Furthermore, in being told that Omega’s leaving box B full or empty correlates to our decision to take only one box or both boxes, and that Omega’s act lies in the past, and that Omega’s act is not directly influencing us, and that we have not found any other property which would screen off this uncertainty even when we inspect our own source code / psychology in advance of knowing our actual decision, and that our computation is the only direct ancestor of our logical output, then we’re being told in unambiguous terms (I think) to make our own physical act and Omega’s act a common descendant of the unknown logical output of our known computation. [italics left off]
Also, here’s my expanded, modified network to account for a few other things (click to enlarge).
ETA: Bolding was irritating, so I’ve decided to separately list what his criteria for a causal map are, given the problem statement. (The implication for the causal graph follows each one in parentheses.)
Must have nodes corresponding to logical uncertainty (Self-explanatory)
Omega’s decision on box B correlates to our decision of which boxes to take (Box decision and Omega decision are d-connected)
Omega’s act lies in the past. (Actions after Omega’s act are uncorrelated with actions before Omega’s act, once you know Omega’s act.)
Omega’s act is not directly influencing us (No causal arrow directly from Omega to us/our choice.)
We have not found any other property which would screen off this uncertainty even when we inspect our own source code / psychology in advance of knowing our actual decision, and that our computation is the only direct ancestor of our logical output. (Seem to be saying the same thing: arrow from computation directly to logical output.)
Our computation is the only direct ancestor of our logical output. (Only arrow pointing to our logical output comes from our computation.)
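To make the structure concrete, here is a minimal sketch of the causal graph as I read these criteria; the node names are my own shorthand, not the labels in the linked diagram.

    # Edges point from cause to effect.
    causal_graph = {
        "our computation":    ["logical output"],                        # the only parent of the logical output
        "logical output":     ["our box choice", "Omega's prediction"],  # common ancestor of both acts
        "Omega's prediction": ["contents of box B"],
        "our box choice":     [],
        "contents of box B":  [],
    }

    def parents(node):
        return [src for src, targets in causal_graph.items() if node in targets]

    # No direct arrow from Omega's prediction to our choice...
    assert "Omega's prediction" not in parents("our box choice")
    # ...yet the two share an ancestor, which is what d-connects them.
    assert parents("our box choice") == parents("Omega's prediction") == ["logical output"]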
Ah, okay, thanks. I can start reading those, then.
I think the way you phrased some things in the OP and the fact that you called the post “The Fundamental Problem Behind Omega” has confused a lot of people. Afaict your position is exactly right… but the title suggests a problem. What is that problem?!
“Problem” as in “Puzzle,” not “Problem” as in “Broken Piece.”
Would changing the title to Puzzle help?
So the fundamental puzzle of Omega is: what do you do if he tells you he has predicted you will give him $5?
And the answer is, “Whatever you want to do, but you want to give him $5.” I guess I’m missing the significance of all this.
Yes, but it’s also clear that that would be a non-problem. What I mean is, there is no decision to make in such a problem, because, by assumption, the “you” referred to is a “you” that will give $5. There’s no need to think about what you “would” do because that’s already known.
But likewise, in Newcomb’s problem, the same thing is happening: by assumption, there is no decision left to make. At most, I can “decide” right now, so I make a good choice when the problem comes up, but for the problem as stated, my decision has already been made.
(Then again, it sounds like I’m making the error of fatalism there, but I’m not sure.)
The problem I see is that then you (together with Omega’s prediction about you) become something like self-PA.
I thought it was obvious, but people are disagreeing with me, so… I don’t know what that means.
When a human brain makes a decision, certain computations take place within it and produce the result. Those computations can be perfectly simulated by a sufficiently-more-powerful brain, e.g. Omega. Once Omega has perfectly simulated you for the relevant time, he can make perfect predictions concerning you.
Perfectly simulating any computation requires at least as many resources as the computation itself (1), so AFAICT it’s impossible for anything, even Omega, to simulate itself perfectly. So a general “perfect predictor” may be impossible. But in this scenario, Omega doesn’t have to be a general perfect predictor; it only has to be a perfect predictor of you.
From Omega’s perspective, after running the simulation, your actions are determined. But you don’t have access to Omega’s simulation, nor could you understand it even if you did. There’s no way for you to know what the results of the computations in your brain will be, without actually running them.
If I recall the Sequences correctly, something like the previous sentence would be a fair definition of Eliezer’s concept of free will.
(1) ETA: On second thought this need not be the case. For example, f(x) = ((x * 10) / 10) + 1 is accurately modeled by f(x) = x + 1. Presumably Omega is a “well-formed” mind without any such rent-shirking spandrels.
Keep in mind that I might be confused about either free will or Newcomb problems.
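As a toy illustration of the simulation story above (my own sketch, assuming the decision really is a computable function that Omega can run):

    # Stand-in for the computation my brain performs when Omega asks for $5.
    def my_decision_procedure(offer):
        return "refuse" if offer == "give me $5" else "ignore"

    # Omega predicts by running the very same computation.
    def omega_predict(decision_procedure, offer):
        return decision_procedure(offer)

    prediction = omega_predict(my_decision_procedure, "give me $5")
    actual = my_decision_procedure("give me $5")
    assert prediction == actual  # the prediction cannot come apart from the act,
    print(prediction)            # because both are outputs of the same computation

Of course, the agent in this sketch still has to run its own procedure to find out what it will do, which is the sense of “free will” gestured at above.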
My first comment above isn’t really intended as an explanation of Newcomb’s original problem, just an explanation of why they elicit feelings of confusion.
My own initial confusion regarding them has (I think) partly evaporated as a result of considering pragmatics, and partly too as a result of reading Julian Barbour’s book on timeless physics on top of the relevant LW sequences.
Okay. That helps, thanks.
Sounds like you might be having confusion resulting from circular mental causal models. You’ve got an arrow from Omega to X. Wrong direction. You want to reason, “If X is likely to happen, Omega will predict X.”
I believe the text you quote is intended to be interpreted as material implication, not causal arrows.
Sure. So, X implies that Omega will predict X. The four possible states of the universe:
Where
X is “You will give Omega $5 if Y happens” and
Y is “Omega appears, tells you it predicted X, and asks you for $5”:
1) X is true; Omega does Y
2) X is false; Omega does Y
3) X is true; Omega does not do Y
4) X is false; Omega does not do Y
Number two will not happen because Omega will not predict X when X is false. Omega doesn’t even appear in options 3 and 4, so they aren’t relevant. The last remaining option is:
X is true; Omega does Y. Filling it out:
X is “You will give Omega $5 if Omega appears, tells you it predicted X, and asks you for $5.”
Hmm… that is interesting. X includes a reference to X, which isn’t a problem in language, but could be a problem with the math. The problem is not as simple as putting “you will give Omega $5” in for X because that isn’t strictly what Omega is asking.
The easiest simplification is to take out the part about Omega telling you it predicted X… but that is such a significant change that I consider it a different puzzle entirely.
Is this your objection?
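For concreteness, here is a quick enumeration of the four states above, assuming only the stated rule that Omega never does Y when X is false:

    from itertools import product

    consistent = []
    for x_is_true, omega_does_y in product([True, False], repeat=2):
        if omega_does_y and not x_is_true:
            continue  # state 2: Omega will not predict X when X is false
        if not omega_does_y:
            continue  # states 3 and 4: Omega never shows up, so they are irrelevant here
        consistent.append((x_is_true, omega_does_y))

    print(consistent)  # [(True, True)]: only "X is true; Omega does Y" remains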
That is an interesting math problem. And the math problem has a solution, which is called a quine. So the self-referentialness of the prediction is not by itself a sufficient objection to your scenario.
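For what it’s worth, the standard two-line Python quine (not the particular construction this comment has in mind) shows that a definition can refer to itself without contradiction; the two lines below print their own source exactly:

    s = 's = %r\nprint(s %% s)'
    print(s % s)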
Nice, thanks.
If by “locked in” you mean that only a subset of all possible world states are available, then yes, your first sentence is on target.
As to the second, it’s not really a matter of the question making sense. It’s a well-formed English sentence, its meaning is clear, it can be answered, and so on.
It is just that the question will reliably induce answers which are answers to something different from the scenario as posed, in which a) Omega is understood to be a perfect predictor, and b) all the relevant facts are only the ordinary state of the world plus a). In your scenario, the answer I want to give—in fact the answer I would give—is “I tell Omega to get lost.” I would answer as if you’d asked “What do you want to answer”, or “What outcome would you prefer, if you were free to disregard the logical constraints on the scenario.”
Suppose I ask you to choose a letter string which conforms to the pattern (B|Z)D?. The letter B is worth $1M and the letter D is worth $1K. You are to choose the best possible string. Clearly the possibilities are BD, ZD, B, Z. Now we prefix each string with a digit giving the length of your choice: 2BD, 2ZD, 1B, 1Z.
The original Newcomb scenario boils down to this: conditional on the string not containing both 2 and B (and not containing both 1 and Z), which string choice has the highest expected value? You’re disguising this question, which has an obvious and correct answer of “1B”, as another (“What do you do”).
It doesn’t matter that 2BD has the highest expected value of all. It doesn’t matter that there seems to be a “timing” consideration, in which Omega has “already” chosen the second letter in the string, and you’re “choosing” the number “afterwards”. The information that Omega is a perfect predictor is a logical constraint on the strings that you can pick from, i.e. on the “end states” that you can experience. Your “decision” has to be compatible with one of these end states.
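Putting numbers on the letter-string version (taking B as $1M, D as $1K, and Z as worth nothing, which is how I read the setup), and keeping only the strings the Omega constraint allows:

    values = {"2BD": 1_001_000, "2ZD": 1_000, "1B": 1_000_000, "1Z": 0}
    allowed = {s: v for s, v in values.items()
               if not ("2" in s and "B" in s) and not ("1" in s and "Z" in s)}

    print(allowed)                        # {'2ZD': 1000, '1B': 1000000}
    print(max(allowed, key=allowed.get))  # '1B', the one-boxing choice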
It is just that the question will reliably induce answers which are answers to something different from the scenario as posed[...]
Why? I don’t understand why the answers are disconnected from the scenario. Why isn’t all of this included in the concept of a perfect predictor?
[...], in which a) Omega is understood to be a perfect predictor, and b) all the relevant facts are only the ordinary state of the world plus a). In your scenario, the answer I want to give—in fact the answer I would give—is “I tell Omega to get lost.” I would answer as if you’d asked “What do you want to answer”, or “What outcome would you prefer, if you were free to disregard the logical constraints on the scenario.”
So… what if the scenario allows for you to want to give $5? The scenario you are talking about is impossible because Omega wouldn’t have asked you in that scenario. It would have been able to predict your response and would have known better than to ask.
Suppose I ask you to choose a letter[...] which string choice has the highest expected value? You’re disguising this question, which has an obvious and correct answer of “1B”, as another (“What do you do”).
Hmm. Okay, that makes sense.
It doesn’t matter that 2BD has the highest expected value of all. It doesn’t matter that there seems to be a “timing” consideration, in which Omega has “already” chosen the second letter in the string, and you’re “choosing” the number “afterwards”.
Are you saying that it doesn’t matter for the question, “Which string choice has the highest expected value?” or the question, “What do you do?” My guess is the latter.
The information that Omega is a perfect predictor is a logical constraint on the strings that you can pick from, i.e. on the “end states” that you can experience. Your “decision” has to be compatible with one of these end states.
Okay, but I don’t understand how this distinguishes the two questions. If I asked, “What do you do?” what am I asking? Since it’s not “Which string scores best?”
My impression was that asking, “What do you do?” is asking for a decision between all possible end states. Apparently this was a bad impression?
From a standpoint of the psychology of language, when you ask “What do you do”, you’re asking me to envision a plausible scenario—basically to play a movie in my head. If I can visualize myself two-boxing and somehow defying Omega’s prediction, my brain will want to give that answer.
When you ask “What do you do”, you’re talking to the parts of my brain that consider all of 2BD, 2ZD, 1B and 1Z as relevant possibilities (because they have been introduced in the description of the “problem”).
If you formalize first then ask me to pick one of 2ZD or 1B, after pointing out that the other possibilities are eliminated by the Omega constraint, I’m more likely to give the correct answer.
Oh. Okay, yeah, I guess I wasn’t looking for an answer in terms of “What verbal response do you give to my post?” I was looking for an answer strictly in terms of possible scenarios.
Is there a better way to convey that than “What do you do?” Or am I still missing something? Or… ?