Is it not rather Omega’s undisclosed method that determines the contents? That seems to make all the difference.
No. The method’s output depends on its input, which by hypothesis is a specification of the situation that includes all the information necessary to determine the output of the individual’s decision algorithm. Hence the decision algorithm is a causal antecedent of the contents of the boxes.
I mean, the actual token, the action, the choice, the act of my choosing does not determine the contents. It’s Omega’s belief (however obtained) that this algorithm is such-and-such that led it to fill the boxes accordingly.
That is right—the choice does not determine the contents. But the choice is not as independent as common intuition suggests: Omega’s belief and your choice share common causes. Human decisions are caused—they don’t spontaneously spring from nowhere, causally unconnected to the rest of the universe—even if that’s how it sometimes feels from the inside. The situational state, and the state of your brain going into the situation, determine the decision that your brain will ultimately produce. Omega is presumed to know enough about these prior states, and about how you function, to know what you will decide. Omega may well know better than you do what decision you will reach! It’s important to realize that this is not that far-fetched. Heck, that very thing sometimes happens between people who know each other very well, without the benefit of one of them being Omega!

Your objection supposes that somehow everything in the world, including your brain, could be configured so as to lead to a one-box decision, but that at the last moment you could pull a head-fake and spontaneously spawn a transcendent decision process that decides to two-box. It might feel to you intuitively that humans can do this, but as far as we know they do not in fact possess that degree of freedom.
To summarize, Omega’s prediction and your decision have common ancestral causes. Human decision-making feels transcendent from the inside, but it is not literally so. Resist thinking of first-person choosing as some kind of prime mover.
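To make that causal picture concrete, here is a minimal sketch (my own toy model; the agent-as-function representation, the payoff amounts, and Omega-as-literal-simulator are assumptions, not part of the problem statement). Prediction and choice are two reads of the same algorithm, so they correlate perfectly even though neither causes the other:

```python
# Toy model of the setup: Omega predicts by running a copy of the very
# decision algorithm that will later produce the real choice, so the
# prediction and the choice share a common cause.

def payoff(decision_algorithm):
    # Omega "runs a copy": the prediction is caused by the algorithm itself.
    prediction = decision_algorithm()
    opaque_box = 1_000_000 if prediction == "one-box" else 0
    # The real choice is produced by the same algorithm, so it cannot
    # diverge from the prediction at the last moment.
    choice = decision_algorithm()
    return opaque_box if choice == "one-box" else opaque_box + 1_000

print(payoff(lambda: "one-box"))  # 1000000
print(payoff(lambda: "two-box"))  # 1000
```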
Yes, that’s true. Now chase “however obtained” up a level—after all, you have all the information necessary to do so.
What do you mean? Omega could have created and run a copy, for instance, but even so, there would be no causal link from my choice to the contents. That’s probably the whole point for the two-boxer majority.
I can see a rationale behind one-boxing, and it might even be a standoff, but why almost no one here seems to see the point of two-boxing, and the amazing overconfidence that goes with it, is beyond me.
I mean that as part of the specification of the problem, Omega has all the information necessary to determine what you will choose before you know yourself. There are causal arrows that descend from the situation specified by that information to (i) your choice, and (ii) the contents of the box.
You stated that “the game is rigged”, and the reasoning behind two-boxing ignores that fact. In common parlance a rigged game is unwinnable, but this game is knowably winnable. So go ahead and win, without worrying about whether the choice has the label “rational” attached!
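For concreteness (my own back-of-the-envelope numbers, assuming an imperfect predictor with accuracy p and the usual $1,000,000 / $1,000 payoffs): one-boxing pays p × $1,000,000 in expectation, while two-boxing pays (1 − p) × $1,000,000 + $1,000. One-boxing comes out ahead whenever p > 0.5005, so the game is winnable even against a barely-better-than-chance Omega.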
Sadly, we seem to make no progress in any direction. Thanks for trying.
Likewise.
Yeah, I gotta give you both props for sticking it out that long. The annoying part for me is that I see both sides just fine and can see where the conceptual miss keeps happening.
Alas, that doesn’t mean I can clarify anything better than you did.