Upon reading this, I immediately went,
“Well, General Relativity includes solutions that have closed timelike curves, and I certainly am not in any position to rule out the possibility of communication by such. So I have no actual reason to rule out the possibility that which strategy I choose will, after I make my decision, be communicated to Omega in my past and the boxes then filled accordingly. So I had better one-box, in order to choose the closed timelike curve in which Omega fills the box.”
I understand, looking at Wikipedia, that in Nozick’s formulation he simply stipulated that the box’s contents do not depend on the actual decision. Fine. How would he go about proving that to someone actually faced with the scenario? Rational people do not risk a million dollars on an unprovable statement by a philosopher. The same goes for claims that, for example, Omega didn’t rig the boxes so that two-boxing annihilates the contents of box B, or that Omega doesn’t somehow teleport the money into B after the decider chooses to one-box. Those declarations may have a truth value of 1 for an observer standing outside the scenario, but unless they are empirically testable from within it, the person making the decision cannot assign them a probability anywhere near 1.
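To make the stakes concrete, here is a minimal sketch of the expected-value arithmetic, assuming the standard payoffs of $1,000 in box A and $1,000,000 in box B; the credences p and q below are my own labels, not part of the scenario:

```python
# Sketch of the decider's expected values, under assumed standard payoffs.
A = 1_000          # visible amount in box A (always yours if taken)
B = 1_000_000      # amount in box B, if it is filled

def ev_one_box(p: float, q: float) -> float:
    """Take only box B.
    p: credence that the decision itself determines B's contents
       (closed timelike curve, hidden mechanism, teleportation, ...).
    q: credence that B is full anyway, if the decision is irrelevant."""
    return p * B + (1 - p) * q * B

def ev_two_box(p: float, q: float) -> float:
    """Take both boxes; if the decision matters, B ends up empty."""
    return A + (1 - p) * q * B

# The difference is p*B - A regardless of q, so one-boxing wins
# whenever p > A / B = 0.001: any credence above one in a thousand
# that the unverifiable "given" is false makes the sure $1,000 a bad trade.
print(ev_one_box(0.01, 0.5) - ev_two_box(0.01, 0.5))   # 9000.0
```

In other words, the philosopher’s declaration has to be worth more than 99.9% confidence to the decider before two-boxing even breaks even.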
Every “given” that the decision-maker can’t verify is a “given” that is not usable for making the decision. The whole argument for two-boxing depends on a boundary violation: it assumes that knowledge available to the reader, but unavailable to the character inside the scenario, can somehow be used by that character to make the decision.