I very well might be wrong about how reality works. I’m just saying that if it happens to work in the way I describe, the decision would be obvious. And furthermore, if you specify the way in which reality works, the decision in this situation is always obvious. The debate seems to be more about the way reality works.
Regarding the Hannibal Lecter situation you propose, I don’t understand it well enough to say, but I think I address all the variations of this question above.
My point is that humans are eminently nonrandom, to the extent that a smart human-level intelligence could probably fill in for Omega.
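To put a rough number on that (my own back-of-the-envelope sketch, using the standard $1,000,000 / $1,000 payoffs rather than anything from the original discussion): the predictor doesn’t need to be anywhere near perfect for one-boxing to win on expected value.

```python
# Back-of-the-envelope check: how accurate must the predictor be before
# one-boxing beats two-boxing? Assumes the standard $1,000,000 / $1,000 payoffs.

BIG, SMALL = 1_000_000, 1_000

def ev_one_box(p):
    # With probability p the predictor correctly foresaw one-boxing and filled box B.
    return p * BIG

def ev_two_box(p):
    # With probability (1 - p) the predictor wrongly expected one-boxing, so box B
    # is full anyway; the $1,000 in box A is guaranteed either way.
    return (1 - p) * BIG + SMALL

for p in (0.5, 0.5005, 0.6, 0.9, 0.99):
    print(f"accuracy {p:.4f}:  one-box {ev_one_box(p):>10,.0f}   two-box {ev_two_box(p):>10,.0f}")
```

The crossover sits at an accuracy of 0.5005, barely better than a coin flip, which is the sense in which a merely human-level model of a human could stand in for Omega.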
I think there’s an article here somewhere about how free will and determinism are compatible … I’ll look around for it now...
EDIT: (Free will stuff forthcoming.)
Another question is what to do before Omega makes his decision.
It seems plausible that Omega could read your mind. So then, you should try to make Omega think that you will one-box. If you’re capable of doing this and it works, then great! If not, you didn’t lose anything by trying, and you gave yourself the chance of possibly succeeding.
If Omega is smart enough, the only way to make it think you will one-box is to be the sort of agent that one-boxes in this situation, regardless of why. So, knowing that, you should one-box, because that means you’re the sort of agent that one-boxes when they know that. That’s the standard LW position, anyway.
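A toy version of that claim, assuming the predictor works by simply running your decision procedure ahead of time (the payoff function and the two example agents below are made up for illustration):

```python
# Toy Newcomb setup: the predictor simulates your decision procedure before
# filling the boxes, so the only way to be predicted as a one-boxer is to
# actually be one. Purely illustrative.

BIG, SMALL = 1_000_000, 1_000

def payoff(agent):
    prediction = agent()              # Omega runs your policy in advance
    box_b = BIG if prediction == "one-box" else 0
    choice = agent()                  # then you make your actual choice
    return box_b if choice == "one-box" else box_b + SMALL

one_boxer = lambda: "one-box"
two_boxer = lambda: "two-box"

print("one-boxer walks away with:", payoff(one_boxer))   # 1000000
print("two-boxer walks away with:", payoff(two_boxer))   # 1000
```

In this toy model there is no third agent that gets predicted as a one-boxer and then two-boxes, because the prediction and the choice come from the same procedure.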
I keep saying that if you specify the physics/reality, the decision to make is obvious. People keep replying by basically saying, “but physics/reality works this way, so this is the answer”. And then I keep replying, “maybe you’re right. I don’t know how it works. All I know is that the argument is over physics/reality.”
Do you agree with this? If not, where do you disagree?
Their point (which may or may not be based on a misunderstanding of what you’re talking about) is that one of your options (“free will”) does not correspond to a possible set of the laws of physics—it’s self-contradictory.
I think this is the relevant page. Key quote:
People who live in reductionist universes cannot concretely envision non-reductionist universes. They can pronounce the syllables “non-reductionist” but they can’t imagine it.
And if you are smart enough, you should decide what to do by trying to predict what Omega would do. Omega’s attempt to predict your actions may end up being undecidable if you’re smart enough to predict Omega in turn.
Or to put it another way, the stipulation that Omega can predict your actions limits how smart you can be and what strategies you can use.
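Here is a crude sketch of that circularity, assuming both sides predict by simulating the other (nothing below is a real decision-theory implementation; it just makes the regress literal):

```python
# If the agent decides by predicting Omega, and Omega predicts by simulating
# the agent, neither simulation ever bottoms out.

import sys
sys.setrecursionlimit(100)   # keep the inevitable blow-up small and fast

def omega_predicts():
    # Omega's prediction is whatever the agent would decide...
    return agent_decides()

def agent_decides():
    # ...but the agent can't decide until it knows Omega's prediction.
    prediction = omega_predicts()
    # Whatever rule goes here is moot: the call above never returns.
    return "two-box" if prediction == "one-box" else "one-box"

try:
    agent_decides()
except RecursionError:
    print("mutual simulation never terminates")
```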
Well, I guess that’s true—presumably the reason the less-intuitive “Omega” is used in the official version. Omega is, by definition, smarter than you—regardless of how smart you personally are.
This is true, but generally the question “what should you do” means “what is the optimal thing to do”. It’s odd to have a problem that stipulates that you cannot find the optimal thing to do and then asks what the next-best thing to do is instead.