I don’t know how to respond to this or Morendil’s second comment. I feel like I am missing something obvious to everyone else, but when I read explanations I feel like they are talking about a completely unrelated topic.
Things like this:
You seem to be confused about free will. Keep reading the Sequences and you won’t be.
Confuse me because as far as I can tell, this has nothing to do with free will. I don’t care about free will. I care about what happens when a perfect predictor enters the room.
Is such a thing just completely impossible? I wouldn’t have expected the answer to this to be Yes.
If you do know what the prediction is, then the way in which you react to that prediction determines which prediction you’ll hear. For example, if I walk up to someone and say, “I’m good at predicting people in simple problems, I’m truthful, and I predict you’ll give me $5,” they won’t give me anything. Since I know this, I won’t make that prediction. If people did decide to give me $5 in this sort of situation, I might well go around making such predictions.
Okay, yeah, so restrict yourself only to the situations where people will give you the $5 even though you told them the prediction. This is a good example of my frustration. I feel like your response is completely irrelevant. Experience tells me this is highly unlikely. So what am I missing? Some key component of free will? A bad definition of “perfect predictor”? What?
To me the scenario seems to be as simple as: If Omega predicts X, X will happen. If X weren’t going to happen, Omega wouldn’t predict X.
I don’t see how including “knowledge of the prediction” in X makes any difference. I don’t see how whatever definition of free will you are using makes any difference.
“Go read the Sequences” is fair enough, but I wouldn’t mind a hint as to what I am supposed to be looking for. “Free will” doesn’t satisfy my curiosity. Can you at least tell me why free will matters here? Is it something as simple as “you cannot predict past a free-will choice”?
As it is right now, I haven’t learned anything other than, “You’re wrong.”
I sympathize with your frustration at those who point you to references without adequate functional summaries. Unfortunately, I struggle with some of the same problems you’re asking about.
Still, I can point you to the causal map that Eliezer_Yudkowsky believes captures this problem accurately (ETA: That means Newcomb’s problem, though this discussion started off on a different one).
The final diagram in this post shows how he views it. He justifies this causal model by the constraints of the problem, which he states here.
it is pretty clear that the Newcomb’s Problem setup, if it is to be analyzed in causal terms at all, must have nodes corresponding to logical uncertainty, on pain of violating the axioms governing causal graphs. Furthermore, in being told that Omega’s leaving box B full or empty correlates to our decision to take only one box or both boxes, and that Omega’s act lies in the past, and that Omega’s act is not directly influencing us, and that we have not found any other property which would screen off this uncertainty even when we inspect our own source code / psychology in advance of knowing our actual decision, and that our computation is the only direct ancestor of our logical output, then we’re being told in unambiguous terms (I think) to make our own physical act and Omega’s act a common descendant of the unknown logical output of our known computation. [italics omitted]
Also, here’s my expanded, modified network to account for a few other things (click to enlarge).
ETA: Bolding was irritating, so I’ve decided to separately list his criteria for a causal map, given the problem statement. (The implication for the causal graph follows each one in parentheses; a rough sketch of the resulting graph follows the list.)
Must have nodes corresponding to logical uncertainty (Self-explanatory)
Omega’s decision on box B correlates to our decision of which boxes to take (Box decision and Omega decision are d-connected)
Omega’s act lies in the past. (Actions after Omega’s act are uncorrelated with actions before Omega’s act, once you know Omega’s act.)
Omega’s act is not directly influencing us (No causal arrow directly from Omega to us/our choice.)
We have not found any other property which would screen off this uncertainty even when we inspect our own source code / psychology in advance of knowing our actual decision, and our computation is the only direct ancestor of our logical output. (These seem to be saying the same thing: an arrow from our computation directly to our logical output.)
Our computation is the only direct ancestor of our logical output. (Only arrow pointing to our logical output comes from our computation.)
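For concreteness, here is a minimal sketch of the graph those criteria seem to pin down, assuming the structure in the quoted passage (our physical act and Omega’s act as common descendants of the logical output of our known computation). The node names are my own shorthand, not anything from the post.

```python
# Rough encoding of the causal graph described above; edges point from
# cause to effect, and node names are illustrative shorthand only.
parents = {
    "our_computation": [],                  # the known decision procedure we run
    "logical_output": ["our_computation"],  # its (logically uncertain) result
    "our_act": ["logical_output"],          # take one box or both boxes
    "omegas_act": ["logical_output"],       # leave box B full or empty (in the past)
}

# Only arrow into our logical output comes from our computation.
assert parents["logical_output"] == ["our_computation"]

# No direct arrow from Omega's act to us or our choice.
assert "omegas_act" not in parents["our_act"]

# Our act and Omega's act are correlated (d-connected) only because they share
# the common ancestor "logical_output", not because one causes the other.
assert set(parents["our_act"]) & set(parents["omegas_act"]) == {"logical_output"}
```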
Ah, okay, thanks. I can start reading those, then.
I think the way you phrased some things in the OP and the fact that you called the post “The Fundamental Problem Behind Omega” have confused a lot of people. AFAICT your position is exactly right… but the title suggests a problem. What is that problem?!
“Problem” as in “Puzzle,” not “Problem” as in “Broken Piece.”
Would changing the title to Puzzle help?
So the fundamental puzzle of Omega is: what do you do if he tells you he has predicted you will give him $5?
And the answer is, “Whatever you want to do, but you want to give him $5.” I guess I’m missing the significance of all this.
Yes, but it’s also clear that that would be a non-problem. What I mean is, there is no decision to make in such a problem, because, by assumption, the “you” referred to is a “you” that will give $5. There’s no need to think about what you “would” do because that’s already known.
But likewise, in Newcomb’s problem, the same thing is happening: by assumption, there is no decision left to make. At most, I can “decide” right now, so I make a good choice when the problem comes up, but for the problem as stated, my decision has already been made.
(Then again, it sounds like I’m making the error of fatalism there, but I’m not sure.)
The problem I see is that you (together with Omega’s prediction about you) then become something like self-PA.
I thought it was obvious, but people are disagreeing with me, so… I don’t know what that means.
When a human brain makes a decision, certain computations take place within it and produce the result. Those computations can be perfectly simulated by a sufficiently-more-powerful brain, e.g. Omega. Once Omega has perfectly simulated you for the relevant time, he can make perfect predictions concerning you.
Perfectly simulating any computation requires at least as many resources as the computation itself (1), so AFAICT it’s impossible for anything, even Omega, to simulate itself perfectly. So a general “perfect predictor” may be impossible. But in this scenario, Omega doesn’t have to be a general perfect predictor; it only has to be a perfect predictor of you.
From Omega’s perspective, after running the simulation, your actions are determined. But you don’t have access to Omega’s simulation, nor could you understand it even if you did. There’s no way for you to know what the results of the computations in your brain will be, without actually running them.
If I recall the Sequences correctly, something like the previous sentence would be a fair definition of Eliezer’s concept of free will.
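As a toy illustration of that picture (entirely my own construction, not anything from the Sequences): if Omega predicts you by running the very computation your brain runs, the prediction and your act cannot come apart, yet from the inside you still have to run the computation to learn its result.

```python
# Toy model: your decision is the output of a deterministic computation,
# and Omega predicts you by running that same computation.

def your_decision(situation: str) -> str:
    """Stand-in for the computation your brain performs."""
    if situation == "Omega asks you for $5":
        return "give $5"
    return "do nothing"

def omegas_prediction(situation: str) -> str:
    """Omega simulates you perfectly, i.e. runs the same computation."""
    return your_decision(situation)

situation = "Omega asks you for $5"
assert omegas_prediction(situation) == your_decision(situation)
# From the inside, you still have to run your_decision() to find out what
# you will do; that is the sense of "free will" gestured at above.
```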
(1) ETA: On second thought this need not be the case. For example, f(x) = ((x * 10) / 10) + 1 is accurately modeled by f(x) = x + 1. Presumably Omega is a “well-formed” mind without any such rent-shirking spandrels.
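A quick check of that footnote’s example, restricted to integer inputs so the arithmetic stays exact (the restriction is my addition; with floating-point division the two forms can drift apart for very large inputs):

```python
# The "wasteful" computation and its cheaper model agree on every integer
# input, so a predictor can model it using fewer resources than it consumes.
def wasteful(x: int) -> int:
    return (x * 10) // 10 + 1   # integer division keeps the check exact

def cheap(x: int) -> int:
    return x + 1

assert all(wasteful(x) == cheap(x) for x in range(-1000, 1000))
```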
Keep in mind that I might be confused about either free will or Newcomb problems.
My first comment above isn’t really intended as an explanation of Newcomb’s original problem, just an explanation of why such problems elicit feelings of confusion.
My own initial confusion regarding them has (I think) partly evaporated as a result of considering pragmatics, and partly too as a result of reading Julian Barbour’s book on timeless physics on top of the relevant LW sequences.
Okay. That helps, thanks.
Sounds like you might be having confusion resulting from circular mental causal models. You’ve got an arrow from Omega to X. Wrong direction. You want to reason, “If X is likely to happen, Omega will predict X.”
I believe the text you quote is intended to be interpreted as material implication, not causal arrows.
Sure. So, X implies that Omega will predict X. The four possible states of the universe:
Where
X is “You will give Omega $5 if Y happens” and
Y is “Omega appears, tells you it predicted X, and asks you for $5”:
1) X is true; Omega does Y
2) X is false; Omega does Y
3) X is true; Omega does not do Y
4) X is false; Omega does not do Y
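A minimal formalization of that enumeration (my own sketch, not anything from the comment), encoding the constraint that Omega only does Y when X is true:

```python
from itertools import product

# The four combinations of (X is true, Omega does Y).
cases = list(product([True, False], repeat=2))

# Constraint from the setup: Omega, a perfect predictor, only does Y
# (appears and announces its prediction of X) when X is actually true.
consistent = [(x, y) for x, y in cases if not (y and not x)]

# Cases 3 and 4 (Omega does not do Y) survive but are irrelevant here;
# the only surviving case in which Omega appears is "X true, Omega does Y".
appears = [(x, y) for x, y in consistent if y]
assert appears == [(True, True)]
```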
Number two will not happen because Omega will not predict X when X is false. Omega doesn’t even appear in options 3 and 4, so they aren’t relevant. The last remaining option is:
X is true; Omega does Y. Filling it out:
X is “You will give Omega $5 if Omega appears, tells you it predicted X, and asks you for $5.”
Hmm… that is interesting. X includes a reference to X, which isn’t a problem in language, but could be a problem with the math. The problem is not as simple as putting “you will give Omega $5” in for X because that isn’t strictly what Omega is asking.
The easiest simplification is to take out the part about Omega telling you it predicted X… but that is a significant enough change that I consider it a different puzzle entirely.
Is this your objection?
That is an interesting math problem. And the math problem has a solution, which is called a quine. So the self-reference in the prediction is not by itself a sufficient objection to your scenario.
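To make the quine point concrete, here is the classic self-printing construction in Python (my example, not one from the thread); roughly the same trick is what lets a prediction mention itself without contradiction.

```python
# The two lines below are a classic quine: run on their own, they print
# exactly their own source text, so this kind of self-reference is consistent.
s = 's = %r\nprint(s %% s)'
print(s % s)
```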
Nice, thanks.