It’s too late to convince Omega that you’re going to 1 box.
You seem to be thinking about Omega as if he’s a mind-reader that can only be affected by your thoughts at the time he set the boxes, instead of a predictor/simulator/very good guesser of your future thoughts.
So it’s not “too late”.
and ideally do so quite reflexively, without even thinking about it.
What does it matter if you’ll do it reflexively or after a great deal of thought? The problem doesn’t say that reflexive decisions are easier for Omega to guess than ones following long deliberation.
I’m modelling Omega as a predictor whose prediction function is based on the box-chooser’s current mental state (and presumably the current state of the chooser’s environment). Omega can simulate that state forward into the future and see what happens, but this is still a function of current state.
This is different from Omega being a pre-cog who can (somehow) see directly into the future, without any forward simulation etc.
Omega can simulate that state forward into the future and see what happens, but this is still a function of current state.
Yes. And what Omega discovers as a result of performing the simulation depends on what decision you’ll make, even if you encounter the problem for the first time, since a physical simulation doesn’t care about cognitive novelty. Assuming you’re digitally encoded, it’s a logically valid statement that if you one-box, then Omega’s simulation says that you one-boxed, and if you two-box, then Omega’s simulation says that you two-boxed. In this sense you control what’s in the box.
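The "predictor as simulator" model can be made concrete with a minimal sketch (the function names and code structure here are my own illustration, assuming the standard $1,000 / $1,000,000 payoffs; the thread itself contains no code):

```python
# Sketch: Omega predicts by running the chooser's own decision
# procedure forward, then fills the opaque box based on the result.

def omega_fill_box(decision_procedure):
    """Omega simulates the chooser's current state and fills the opaque box."""
    predicted = decision_procedure()  # forward simulation of the chooser
    return 1_000_000 if predicted == "one-box" else 0

def play(decision_procedure):
    opaque = omega_fill_box(decision_procedure)  # box contents fixed first
    choice = decision_procedure()                # the chooser decides later
    if choice == "one-box":
        return opaque                            # take only the opaque box
    else:
        return opaque + 1_000                    # two-box: take both

one_boxer = lambda: "one-box"
two_boxer = lambda: "two-box"

print(play(one_boxer))  # 1000000
print(play(two_boxer))  # 1000
```

Because the same decision procedure runs inside the simulation and in the real choice, whatever the chooser actually does is what Omega already "saw", which is the sense in which the choice settles the box's contents.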
I think this is the disconnect… The chooser’s mental state when sampled by Omega causes what goes into the box. The chooser’s subsequent decisions don’t cause what went into the box, so they don’t “control” what goes into the box either. Control is a causal term…
The goal is to get more money, not necessarily to “causally control” money. I agree that a popular sense of “control” probably doesn’t include what I described, but the question of whether that word should include a new sense is a debate about definitions, not about the thought experiment (the disambiguating term around here is “acausal control”, though in normal situations it includes causal control as a special case).
So long as we understand that I refer to the fact that it’s logically valid that if you one-box, then you get $1,000,000, and if you two-box, then you get only $1,000, there is no need to be concerned with that term. Since it’s true that if you two-box, then you only get $1,000, by two-boxing you guarantee that it’s true that you two-box, ergo that you get $1,000. Correspondingly, if you one-box, that guarantees that it’s true that you get $1,000,000.
(The subtlety is hidden in the fact that it might be false that you one-box, in which case it’s also true that your one-boxing implies that 18 is a prime. But if you actually one-box, that’s not the case! See this post for some discussion of this subtlety and a model that makes the situation somewhat clearer.)