Others in this thread have pointed this out, but I will try to articulate my point a little more clearly.
Decision theories that require us to one-box do so because we have incomplete information about the environment. We might be in a universe where Omega thinks that we’ll one-box; if we think that Omega is nearly infallible, we increase this probability by choosing to one-box. Note that probability is about our own information, not about the universe. We’re not modifying the universe; we’re refining our estimates.
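For concreteness, here is a minimal sketch of that estimate-refining calculation, using the standard Newcomb payoffs and an assumed accuracy p with which Omega's prediction matches our actual choice (the numbers and the helper function are illustrative assumptions, not part of the argument above):

```python
# Illustrative only: standard Newcomb payoffs and a hypothetical accuracy p
# with which Omega's prediction matches our actual choice.

def expected_value(choice, p=0.99, box_a=1_000, box_b=1_000_000):
    """Expected payoff if Omega's prediction matches our choice with probability p."""
    if choice == "one-box":
        # With probability p, Omega predicted one-boxing and filled box B.
        return p * box_b
    # With probability p, Omega predicted two-boxing and left box B empty.
    return box_a + (1 - p) * box_b

print(expected_value("one-box"))   # 990000.0
print(expected_value("two-box"))   # 11000.0
```

On these payoffs, one-boxing comes out ahead whenever p exceeds roughly 0.5005, which is the sense in which refining our estimate of which universe we are in favours one-boxing.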
If the box is transparent, and we can see the money, we simply don’t care what Omega says. As long as we trust that the bottom won’t fall out (or any number of other possibilities), we can make our decision because our information (about which universe we are in) is not incomplete.
Likewise, our information about whether we exist is not incomplete; we can’t change it by choosing to go against the genes that got us here.
For situations where our knowledge is incomplete, we actually can derive information (about what kind of a world we inhabit) from our desires, but it is evidence, not certainty, and certainly not acausal negotiation. We can easily have evidence that outweighs this relatively meager data.
If the box is transparent, and we can see the money, we simply don’t care what Omega says. As long as we trust that the bottom won’t fall out (or any number of other possibilities), we can make our decision because our information (about which universe we are in) is not incomplete.
Do you pay the money in Parfit’s Hitchhiker? Do you drink Kavka’s toxin?
Good question, but permit me to contrast the difference.
You are the hitchhiker; recognizing the peril of your situation, you wisely choose to permanently self-modify into an agent that will pay the money. Of course, you then pay the money afterward, because that’s the kind of agent you are.
You appear, out of nowhere, and seem to be a hitchhiker who was just brought into town. Omega informs you of the above situation. If Omega is telling the truth, you have no choice about whether to pay; but if you decide not to pay, you cannot undo the fact that Paul picked you up—apparently Omega was wrong.
In the first, you have incomplete information about what will happen. By self-modifying to determine which world you will be in, you resolve that. In the second, you already got to town, and no longer need to appease Paul.
Kavka’s toxin is a problem with a somewhat more ambiguous setup, but the same reasoning will apply to the version I think you are talking about.
If the box is transparent, and we can see the money, we simply don’t care what Omega says. As long as we trust that the bottom won’t fall out (or any number of other possibilities), we can make our decision because our information (about which universe we are in) is not incomplete.
In transparent Newcomb’s, you’re uncertain about the probability of what you’ve observed, even if not about its utility. You need Omega to make this probability what you prefer.
Is this a MWI concern? I have observed the money with probability 1. There is no probability distribution. The expected long-run frequency distribution of seeing that money is still unknown, but I don’t expect this experiment to be repeated, so that’s an abstract concern.
Again, if I have reason to believe that (with reasonable probability) I’m being simulated and won’t get to experience the utility of that money (unless I one-box), my decision matrix changes, but then I’m back to having incomplete information.
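A minimal sketch of how the decision matrix might shift under that assumption; the simulation probability s is illustrative, and it further assumes the simulated instance cares about the real instance’s payoff and that both instances run the same algorithm:

```python
# Illustrative only: s is a hypothetical probability that this instance of
# "me" is Omega's simulation, whose choice fixes what the real me finds in
# box B. Assumes the simulated instance values the real instance's payoff
# and that both instances run the same algorithm.

def expected_value(choice, s=0.5, box_a=1_000, box_b=1_000_000):
    if choice == "one-box":
        # Real instance: takes the visible box B. Simulated instance: causes
        # box B to be filled, and the real instance (same algorithm) takes it.
        return (1 - s) * box_b + s * box_b
    # Real instance: takes both boxes (setting aside whether this branch is
    # even self-consistent, which is the paradox raised further down).
    # Simulated instance: causes box B to be left empty, so the real
    # instance ends up with box A only.
    return (1 - s) * (box_a + box_b) + s * box_a

print(expected_value("one-box"))   # 1000000.0
print(expected_value("two-box"))   # 501000.0
```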
Likewise, perhaps pre-committing to one-box before you see the money makes sense given the usual setup. But if you can break your commitment once the money is already there, that’s the right choice (even though it means Omega failed). If you can’t, then too bad, but can’t != shouldn’t.
Under what circumstances would you one-box if you were certain that this was the only trial you would experience, the money was visible under both boxes, and your decision will not impact the amount of money available to any other agent in any other trial?
Is this a MWI concern? I have observed the money with probability 1. There is no probability distribution.
No, it’s a UDT concern. What you’ve observed is merely one event among other possibilities, and you should maximize expected utility over all these possibilities.
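For concreteness, a minimal sketch of the two calculations this exchange is comparing, under the illustrative assumption that Omega perfectly predicts the agent’s policy (the framing and numbers are a gloss, not either commenter’s exact claim):

```python
# A sketch of the two scorings in play, assuming for illustration that
# Omega perfectly predicts the agent's policy in transparent Newcomb's.

BOX_A, BOX_B = 1_000, 1_000_000

def policy_value(one_box_on_seeing_money):
    """UDT-style scoring: evaluate the policy before knowing box B's contents.
    Here Omega fills box B exactly when the policy one-boxes on seeing it full."""
    if one_box_on_seeing_money:
        return BOX_B          # B gets filled and the policy takes only B.
    return BOX_A              # B is left empty; taking both yields only A.

def act_value(take_both, visible_b=BOX_B):
    """In-branch scoring: the money is already visible, so score the act
    against the world as observed."""
    return BOX_A + visible_b if take_both else visible_b

print(policy_value(True), policy_value(False))  # 1000000 1000
print(act_value(False), act_value(True))        # 1000000 1001000
```

The two scorings disagree exactly where this thread disagrees: the policy ranking prefers one-boxing, while the act ranking, conditioned on the money already being visible, prefers two-boxing.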
I’m really not trying to be obtuse, but I still don’t understand. The other possibilities don’t exist. If my actions don’t affect the environment that other agents (including my future or other selves) experience, then I should maximize my utility. If, by construction, my actions have the potential to impact other agents, then yes, I should take that into consideration; and if my algorithm needs to decide, before I see the money, to one-box in order for the money to be there in the first place, then that is also relevant.
I’m afraid you’ll need to be a little more explicit in describing why I shouldn’t two-box if I can be sure that doing so will not impact any other agents.
I probably don’t need to harp on this, but the only other reason I can see is that Omega is infallible and wouldn’t have put the money in B if we were also going to take A. If we two-box, then there is a paradox; decision theories needn’t and can’t deal with paradoxes, since paradoxes don’t exist. Either Omega is fallible, or B is empty, or we will one-box. If Omega is probabilistic, it is still in our best interest to decide beforehand to one-box, but if we can get away with taking both, we should (it is more important to commit to one-boxing than it is to be able to break that commitment, but the logic still stands).
That is, if given the opportunity to permanently self-modify to exclusively one-box, I would. But if I appear out of nowhere, and Omega shows me the money but assures me I have already permanently self-modified to one-box, I will take both boxes if it turns out that Omega is wrong (and there are no other consequences to me or other agents).
Doesn’t matter. See Counterfactual Mugging.
If this problem is to be seen as equivalent to the counterfactual mugging, then that’s evidence against the logic espoused by the counterfactual mugging.
I’m far FAR from certain they’re equivalent, mind you—one point of difference is that I can choose to commit to honor all favourable bets, even ones made without my specific consent, but there’s no point in committing to honoring my non-existence, as there’s no alternative me who would be able to honor it likewise.
At some point we must see lunacy for what it is. Achilles can outrun the tortoise; if someone logically proves he can’t, then it’s the logic used that’s wrong, not the reality.