Is this an MWI concern? I have observed the money with probability 1. There is no probability distribution.
No, it’s a UDT concern. What you’ve observed is merely one event among other possibilities, and you should maximize expected utility over all these possibilities.
I’m really not trying to be obtuse, but I still don’t understand. The other possibilities don’t exist. If my actions don’t affect the environment that other agents (including my future or other selves) experience, then I should maximize my utility. If, by construction, my actions have the potential to impact other agents, then yes, I should take that into consideration. And if my algorithm, before I see the money, needs to decide to one-box in order for the money to be there in the first place, then that is also relevant.
I’m afraid you’ll need to be a little more explicit in describing why I shouldn’t two-box if I can be sure that doing so will not impact any other agents.
I probably don’t need to harp on this again, but the only other reason I can see is that Omega is infallible and wouldn’t have put the money in B if we were also going to take A. If we two-box, then there is a paradox; decision theories needn’t and can’t deal with paradoxes, since paradoxes don’t exist. Either Omega is fallible, or B is empty, or we will one-box. If Omega is probabilistic, it is still in our best interest to decide to one-box beforehand, but if we can get away with taking both, we should (it is more important to commit to one-boxing than it is to be able to break that commitment, but the logic still stands).
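To make the probabilistic-Omega case concrete, here is a minimal sketch (my own illustration, not part of the original exchange) of the expected payoff of each committed policy when Omega predicts correctly with probability p, assuming the usual $1,000,000 / $1,000 Newcomb payoffs:

```python
# Minimal sketch: expected payoff of committing to one-box vs. two-box
# when Omega's prediction matches our actual choice with probability p.
# Assumes the usual payoffs: $1,000,000 in box B (if we are predicted
# to one-box) and $1,000 in box A.

def newcomb_ev(one_box: bool, p: float) -> float:
    if one_box:
        # Box B is full iff Omega correctly predicted one-boxing.
        return p * 1_000_000
    # Two-boxing: we always get box A; box B is full only if Omega erred.
    return 1_000 + (1 - p) * 1_000_000

for p in (0.5, 0.6, 0.9, 0.99, 1.0):
    print(f"p={p}: one-box {newcomb_ev(True, p):>12,.0f}  "
          f"two-box {newcomb_ev(False, p):>12,.0f}")

# Committing to one-box wins whenever p * 1e6 > 1e3 + (1 - p) * 1e6,
# i.e. whenever p > 0.5005.
```

Even a modestly accurate Omega makes the one-boxing commitment the better policy ex ante, which is the sense in which it pays to decide beforehand.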
That is, if given the opportunity to permanently self-modify to exclusively one-box, I would. But if I appear out of nowhere, and Omega shows me the money but assures me I have already permanently self-modified to one-box, I will take both boxes if it turns out that Omega is wrong (and there are no other consequences to me or other agents).
Doesn’t matter. See Counterfactual Mugging.
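For reference, a minimal sketch of the standard Counterfactual Mugging payoffs (assuming the usual $100 stake and $10,000 reward), showing why the pay-when-asked policy has higher expected utility before the coin is flipped, even though paying is a pure loss in the branch actually observed:

```python
# Minimal sketch of the standard Counterfactual Mugging (usual $100 /
# $10,000 amounts assumed). Omega flips a fair coin: on tails it asks
# you for $100; on heads it pays you $10,000 iff it predicts you would
# have paid on tails.

def mugging_ev(pays_when_asked: bool) -> float:
    tails_branch = -100 if pays_when_asked else 0
    heads_branch = 10_000 if pays_when_asked else 0
    return 0.5 * tails_branch + 0.5 * heads_branch

print(mugging_ev(True))   # 4950.0 -> the paying policy wins ex ante
print(mugging_ev(False))  # 0.0    -> but paying is a pure loss after tails
```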
If this problem is to be seen as equivalent to the counterfactual mugging, then that’s evidence against the logic espoused in the counterfactual mugging.
I’m far, FAR from certain they’re equivalent, mind you. One point of difference is that I can choose to commit to honoring all favourable bets, even ones made without my specific consent, but there’s no point in committing to honoring my non-existence, as there’s no alternative me who would be able to honor it likewise.
At some point we must see lunacy for what it is. Achilles can outrun the tortoise; if someone logically proves he can’t, then it’s the logic being used that’s wrong, not reality.