I think my third bullet point addresses your comment. You seem to be saying that by choosing to two-box, you’re influencing the past in such a way that it changes what Omega predicted. I’m saying that there are two possibilities: 1) your choice impacts the past, or 2) your choice doesn’t impact the past.
If 1) is true, then you should one-box. If 2) is true, then you should two-box. I honestly don’t have a strong opinion about whether 1) or 2) is the way the world works. But I think that whether 1) or 2) is true is a question of physics, rather than a question of decision theory.
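To make the two cases concrete, here is a minimal sketch of the payoff arithmetic. It assumes the standard Newcomb payoffs (a visible box with $1,000 and an opaque box with either $1,000,000 or nothing), which the thread doesn’t restate, so treat the figures as illustrative rather than part of the original comment.

```python
# Illustrative only: standard Newcomb payoffs assumed ($1,000 in the visible
# box; $1,000,000 in the opaque box iff Omega predicted one-boxing).
def payoff(choice, opaque_box_full):
    visible = 1_000
    opaque = 1_000_000 if opaque_box_full else 0
    return opaque if choice == "one-box" else visible + opaque

# Possibility 1: your choice reaches back and fixes what Omega put in the box.
print(payoff("one-box", opaque_box_full=True))    # 1000000
print(payoff("two-box", opaque_box_full=False))   # 1000 -> one-boxing wins

# Possibility 2: the box contents are already settled, whichever way they went.
for full in (True, False):
    # Two-boxing is better by exactly the visible $1,000 in both cases.
    print(payoff("two-box", full) - payoff("one-box", full))  # 1000
```

Under possibility 1 one-boxing comes out ahead; under possibility 2 two-boxing dominates, which is exactly the split described above.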
You seem to be confusing the effect with the cause; whether you will choose to one-box or two-box depends on your prior state of mind (personality/knowledge of various decision theories/mood/etc), and it is that prior state of mind which also determines where Omega leaves its money.
The choice doesn’t “influence the past” at all; rather, your brain influences both your and Omega’s future choices.
Consider this sequence of events: first you have your prior mind-state, then Omega makes his choice, and then you make your choice. You seem to be saying that your choice is already made up from your prior mind-state, and there is no decision to be made after Omega presents you with the situation. This is a possibility.
I’m saying that another possibility is that you do have a choice at that point. And if you have a choice, there are two further possibilities: the choice you make will impact the past, or it won’t. If it does, then you should one-box. But if it doesn’t impact the past (and if you indeed can be making a choice at this point), then you should two-box.
Just saw this in the comment box, so I don’t know the context, but isn’t this based on the confused notion of “free will” employed by … amateur theologians mostly, I think?
For example—and please, tell me if I’m barking up the wrong tree entirely, it’s quite possible—let’s get rid of Omega and replace him with, say, Hannibal Lecter.
He has gotten to know you quite well, and has specific knowledge of how you behave in situations like this after you’ve considered the fact that you know he knows you know he knows etc etc.
Is it rational to two-box in this situation, because you have “free will” and thus there’s no way he could know what you’re going to do without a time machine?
I very well might be wrong about how reality works. I’m just saying that if it happens to work in the way I describe, the decision would be obvious. And furthermore, if you specify the way in which reality works, the decision in this situation is always obvious. The debate seems to be more about the way reality works.
Regarding the Hannibal Lecter situation you propose, I don’t understand it well enough to say, but I think I address all the variations of this question above.
My point is that humans are eminently nonrandom, to the extent that a smart human-level intelligence could probably fill in for Omega.
I think there’s an article here somewhere about how free will and determinism are compatible … I’ll look around for it now...
EDIT:
Another question is what to do before Omega makes his decision.
It seems plausible that Omega could read your mind. If so, you should try to make Omega think that you will one-box. If you’re capable of doing this and it works, then great! If not, you didn’t lose anything by trying, and you gave yourself a chance of succeeding.
If Omega is smart enough, the only way to make it think you will one-box is by being the sort of agent that one-boxes in this situation, regardless of why. So, knowing that, you should one-box, because that means you’re the sort of agent that one-boxes when they know that. That’s the standard LW position, anyway.
(Free will stuff forthcoming.)
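A rough expected-value sketch of why being a one-boxing agent pays if Omega’s predictions are reliable. It assumes the same standard payoffs as above and a predictor that is correct with probability p; neither figure is spelled out in the thread.

```python
# Rough sketch, not from the original comments: expected winnings if Omega's
# prediction of your policy is correct with probability p (standard payoffs).
def expected_value(policy, p):
    if policy == "one-box":
        # Right prediction -> opaque box is filled; wrong -> it's empty.
        return p * 1_000_000 + (1 - p) * 0
    # two-box: right prediction -> opaque box empty; wrong -> it's filled.
    return p * 1_000 + (1 - p) * 1_001_000

for p in (0.5, 0.9, 0.99, 1.0):
    print(p, expected_value("one-box", p), expected_value("two-box", p))
# One-boxing has the higher expectation once p exceeds about 0.5005.
```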
I keep saying that if you specify the physics/reality, the decision to make is obvious. People keep replying by basically saying, “but physics/reality works this way, so this is the answer”. And then I keep replying, “maybe you’re right. I don’t know how it works. All I know is that the argument is over physics/reality.”
Do you agree with this? If not, where do you disagree?
Their point (which may or may not be based on a misunderstanding of what you’re talking about) is that one of your options (“free will”) does not correspond to a possible set of the laws of physics—it’s self-contradictory.
I think this is the relevant page. Key quote:
People who live in reductionist universes cannot concretely envision non-reductionist universes. They can pronounce the syllables “non-reductionist” but they can’t imagine it.
And if you are smart enough, you should decide what to do by trying to predict what Omega would do. Omega’s attempt to predict your actions may end up becoming undecidable if you’re really smart enough that you can predict Omega.
Or to put it another way, the stipulation that Omega can predict your actions limits how smart you can be and what strategies you can use.
Well, I guess that’s true—presumably the reason the less-intuitive “Omega” is used in the official version. Omega is, by definition, smarter than you—regardless of how smart you personally are.
This is true, but generally the question “what should you do” means “what is the optimal thing to do”. It’s odd to have a problem that stipulates that you cannot find the optimal thing to do and then asks what the next-best thing to do is instead.
You seem to be saying that your choice is already made up from your prior mind-state, and there is no decision to be made after Omega presents you with the situation.
Not exactly; just because Omega knows what you will do beforehand with 1-epsilon certainty doesn’t mean you don’t have a choice, just that you will do what you’ll choose to do.
You still make your decision, and just like every other decision you’ve ever made in your life, it would be based on your goals, values, intuitions, biases, emotions, and memories. The only difference is that someone else has already taken all of those things into account and made a projection beforehand. The decision is still real, and you’re still the one who makes it; it’s just that Omega has a faster clock rate and could figure out what that decision would likely be beforehand using the same initial conditions and laws of physics.
I think I agree with your description of how choice works. Regarding the decision you should make, I can’t think of anything to say that I didn’t say before. If the question specifies how reality/physics works, the decision is obvious.
Is it also your position that I have any way of knowing whether my choice is already made up from my prior mind-state, or not?
I don’t know whether you’ll have any way of knowing if your choice was made up already. I wish I knew more physics and had a better opinion on the way reality works, but with my understanding, I can’t say.
My approach is to say, “If reality works this way, then you should do this. If it works that way, then you should do that.”
Regarding your question, I’m not sure that it matters. If ‘yes’, then you don’t have a decision to make. If ‘no’, then I think it depends on the stuff I talked about in the comments above.
You seem to be saying that your choice is already made up from your prior mind-state
If your choice is not made up from your prior mind state, then Omega would not be able to predict your actions from it. However, it is a premise of the scenario that he can. Therefore your choice is made up from your prior mind state.
If your choice is not made up from your prior mind state, then Omega would not be able to predict your actions from it.
Not necessarily. We don’t know how Omega makes his predictions.
But regardless, I think my fundamental point still stands: the debate is over physics/reality, not decision theory. If the question specified how physics/reality works, the decision theory part would be easy.
Indeed; to make it clearer, consider a prior mind state that says “when presented with this, I’ll flip a coin to decide (or look at some other random variable).” In this situation, Omega can, at best, predict your choice with 50/50 odds. Whether Omega is even a coherent idea depends a great deal on your model of choices.
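A tiny simulation of that point, under the (assumed) model that Omega predicts by running your decision procedure: against a mind-state that defers to a fresh coin, no such prediction beats chance.

```python
import random

# Hypothetical model: the agent's "prior mind-state" is just this procedure,
# and Omega predicts by running the same procedure. Omega cannot run the
# agent's actual future coin, so its accuracy falls to 50/50.
def coin_flip_agent():
    return random.choice(["one-box", "two-box"])

def omega_predicts():
    return random.choice(["one-box", "two-box"])

trials = 100_000
hits = sum(omega_predicts() == coin_flip_agent() for _ in range(trials))
print(hits / trials)  # roughly 0.5
```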
If given prior mind-state S1 and a blue room I choose A, and given S1 and a pink room I choose B, S1 does not determine whether I choose A or B, but Omega (knowing S1 and the color of the room in which I’ll be offered the choice) can predict whether I choose A or B.
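A minimal sketch of this last point: the choice is a deterministic function of the prior mind-state and the environment, so the mind-state alone underdetermines it, while Omega, who is stipulated to know both inputs, can still predict it. The function name and inputs are illustrative, not from the original comment.

```python
# Illustrative: the choice as a function of mind-state S1 plus the room colour.
def choose(mind_state, room_colour):
    if mind_state == "S1":
        return "A" if room_colour == "blue" else "B"
    raise ValueError("unknown mind-state")

def omega_predicts(mind_state, room_colour):
    # Omega evaluates the same function on the same inputs.
    return choose(mind_state, room_colour)

# S1 by itself doesn't fix the answer, but Omega's prediction always matches.
for colour in ("blue", "pink"):
    assert omega_predicts("S1", colour) == choose("S1", colour)
print("prediction matches choice in every environment")
```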