To a first approximation, the point is not that counterfactual mugging (or any other thought experiment) is actually defined in a certain way, but how it should be redefined in order to make the issue navigable. Unless Nomegas are outlawed, no calculation is possible, therefore they are outlawed. Not because they were already explicitly outlawed, or were colloquially understood to be outlawed.
But when we look at this more carefully, the assumption is not actually needed. If unspecified Nomegas are allowed, the distribution of their possible incentives is all over the place, so they almost certainly cancel out in the expected utility of the alternative precommitments. The real problem is not the introduction of Nomegas, but managing to include the possibilities involving Omega in the calculation (as opposed to discarding them as just more Nomegas), taking into account a setting that hasn’t yet been described at the point where the precommitment should be made.
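Here is a minimal Monte Carlo sketch of that cancellation argument. The payoff distribution and dollar amounts are my own illustrative assumptions: if the incentives of unspecified Nomegas are symmetric between rewarding and punishing a given policy, their contribution to the expected-utility gap between precommitments washes out, while the Omega term does not.

```python
import random

random.seed(0)
N = 100_000

# Counterfactual mugging payoffs for the "pay" precommitment:
# lose $100 on tails, gain $10,000 on heads (each with probability 1/2).
omega_term = 0.5 * (-100) + 0.5 * 10_000

# Each sampled Nomega adds some incentive for or against the "pay"
# policy; by assumption the sign of that incentive is unbiased.
mean_nomega_incentive = sum(random.uniform(-1000, 1000) for _ in range(N)) / N

print(f"Omega term:            {omega_term:+.1f}")             # +4950.0
print(f"Mean Nomega incentive: {mean_nomega_incentive:+.1f}")  # close to 0
```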
In counterfactual mugging, there is no physical time at which the agent is in the state of knowledge where the relevant precommitment can be made (that’s the whole point). Instead, we can construct a hypothetical state of knowledge that has updated on the description of the thought experiment, but hasn’t updated on how the coin toss turned out. The agent never actually holds this state of knowledge as a description of everything it knows. Why retract knowledge of the coin toss, rather than knowledge of the thought experiment? No reason; UDT strives to retract all knowledge and make a completely general precommitment covering all eventualities. But in this setting, retracting knowledge of the coin toss while retaining knowledge of Omega creates a tractable decision problem, so a UDT agent that notices the possibility will make the precommitment. Similarly, it should precommit to not paying Omega in a setting where a Nomega that punishes paying Omega the $100 (as described in this post) is known to operate. But only when Nomega is known to be there, not when it isn’t.
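As a worked example, here is what the precommitment calculation looks like from that hypothetical pre-toss state of knowledge, using the standard counterfactual-mugging payoffs ($100 paid on tails, $10,000 received on heads). The Nomega fine in the second case is a made-up number for illustration, not something specified in the post:

```python
p_heads = 0.5

# State of knowledge: thought experiment known, coin toss retracted.
ev_pay    = p_heads * 10_000 + (1 - p_heads) * (-100)   # 4950.0
ev_refuse = 0.0
print("precommit to pay" if ev_pay > ev_refuse else "precommit to refuse")

# Variant where a Nomega that fines paying agents is *known* to operate
# (the $20,000 fine is an illustrative assumption):
ev_pay_with_nomega = ev_pay - 20_000
print("precommit to pay" if ev_pay_with_nomega > ev_refuse
      else "precommit to refuse")
```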
Hmm, perhaps I am still a little confused as to how UDT works. My understanding is that you don’t make your decisions based on the information you have observed; instead, when you “boot up” your UDT, you consider all of the possible world states you may find yourself in and their various measures, and then for each decision you “precommit” to the one that maximizes your expected utility across all of the possible world states that the decision affects.
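For concreteness, here is a toy rendering of that picture; the worlds, measures, and payoffs are hypothetical stand-ins, not anything specified by UDT itself:

```python
worlds = [
    # (measure, payoff under "pay", payoff under "refuse")
    (0.5, 10_000, 0),   # heads-world: Omega rewards the paying policy
    (0.5,   -100, 0),   # tails-world: the paying policy costs $100
]

def score(policy_index):
    # Measure-weighted utility of one policy across all affected worlds.
    return sum(m * payoffs[policy_index] for m, *payoffs in worlds)

best = max(range(2), key=score)
print(["pay", "refuse"][best])  # -> "pay"
```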
If this understanding is correct, then unless we have some sort of prior telling us, when we “boot up” UDT and thus before we interact with Omega, that Omega is more likely to exist than Nomega, I don’t see how UDT could tell us to pay up.
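A sketch of that worry, with hypothetical numbers: if the prior splits its mass between an Omega-world and a Nomega-world that fines the same policy, the verdict depends entirely on that split.

```python
def ev_pay(p_omega):
    omega_world  = 0.5 * 10_000 + 0.5 * (-100)  # counterfactual mugging
    nomega_world = -10_000                      # assumed fine for paying
    return p_omega * omega_world + (1 - p_omega) * nomega_world

for p in (0.3, 0.5, 0.7):
    print(p, "pay" if ev_pay(p) > 0 else "refuse")
# Only a sufficiently Omega-favoring prior makes "pay" win.
```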
I think it is somewhat likely that I am missing something here, but I don’t know what.