Ah! So you’re defining “this” as an exact bitwise match.
That’s not the problem. The problem is that you’ve already updated your probability distribution, so you just don’t care about the cases where the binary digit came up 0 instead of 1 - not because your utility function isn’t over them, but because they have negligible probability.
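To put numbers on that, here’s a minimal sketch, assuming the conventional Counterfactual Mugging payoffs ($100 asked for, $10,000 counterfactually offered) and a fair binary digit. The utility function ranges over both branches in both calculations; only the probability weighting changes once you’ve updated:

```python
# Counterfactual Mugging, sketched with the conventional payoffs (assumed here):
# pay $100 if the digit came up 1; in the 0-branch Omega pays $10,000 iff you
# are the sort of agent who would have paid in the 1-branch.

PAY_COST = 100
REWARD = 10_000

def expected_value(pays_when_asked, p_digit_is_1):
    """Expected value of the 'pay when asked' disposition, given the
    probability you currently assign to the digit having come up 1
    (the branch where Omega asks you for the money)."""
    ev_if_1 = -PAY_COST if pays_when_asked else 0
    # In the 0-branch Omega rewards the disposition to pay, not the act.
    ev_if_0 = REWARD if pays_when_asked else 0
    return p_digit_is_1 * ev_if_1 + (1 - p_digit_is_1) * ev_if_0

# Before updating, both branches carry weight: paying wins, 4950 to 0.
print(expected_value(True, 0.5), expected_value(False, 0.5))

# After updating on the observed 1, the 0-branch has negligible probability
# (taken to the limit here), so the very same utility function now scores
# paying as a pure loss: -100 to 0.
print(expected_value(True, 1.0), expected_value(False, 1.0))
```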
the number has been chosen to be prime or composite depending on whether the money is in the opaque box
(I first read that variant in Martin Gardner.) The epistemically intuitive answer is “Once I choose to take one box, I will be able to infer that this number has always been prime.” If I wanted to walk through TDT doing this, I’d draw a causal graph with Omega’s choice descending from my decision diagonal, and sending a prior-message in turn to the parameters of a child node that runs a primality test over numbers and picked this number because it passed (or failed) that test, so that, knowing or having decided my logical choice, seeing this number becomes evidence that its primality test came up positive.
In terms of logical control, you don’t control whether the primality test comes up positive on this fixed number, but you do control whether this number got onto the box-label by passing a primality test or a compositeness test.
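If it helps, here’s a toy simulation of how I’d model Omega’s side of this; the number range and the perfect prediction are assumptions for illustration, not part of the original problem statement. The point is that your decision doesn’t change whether any fixed number is prime, but it does determine which filter the number on the box-label had to pass:

```python
import random

def is_prime(n):
    """Trial-division primality check, fine for the small numbers used here."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def omega_labels_box(predicted_one_boxing, rng):
    """Omega picks the number on the box by running a primality test or a
    compositeness test, depending on its prediction of your choice.
    (An illustrative model of the variant, not a canonical formalization.)"""
    while True:
        n = rng.randrange(10, 10_000)
        if is_prime(n) == predicted_one_boxing:
            return n

rng = random.Random(0)

# Treat the predictor as perfect: Omega's prediction equals your decision.
# You don't control whether any fixed number is prime, but you do control
# which test the number on the label passed.
for decision in (True, False):   # True = one-box, False = two-box
    labels = [omega_labels_box(decision, rng) for _ in range(1000)]
    frac_prime = sum(is_prime(n) for n in labels) / len(labels)
    print("one-box" if decision else "two-box",
          "-> fraction of labels that are prime:", frac_prime)
# one-box -> 1.0, two-box -> 0.0: given your decision, seeing the number is
# evidence about which test it passed, not evidence that you altered arithmetic.
```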
(I don’t remember where I first read that variant, but Martin Gardner sounds likely.) Yes, I agree with your analysis of it—but that doesn’t contradict the assertion that you can solve these problems by extending your utility function across parallel versions of you who received slightly different sensory data. I will conjecture that this turns out to be the only elegant solution.
Sorry, that doesn’t make any sense. It’s the probability distribution that’s the issue, not the utility function. UDT tosses out the probability distribution entirely; TDT still uses it, and therefore fails on Counterfactual Mugging.
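Concretely, reusing the same payoffs as in the sketch above (and this framing is my own shorthand, not a canonical formalization of either theory): an updateless agent scores whole policies against the unconditioned prior, while an agent that updates first scores acts against the posterior, and that is exactly where they come apart on Counterfactual Mugging:

```python
# Same assumed payoffs as before: pay $100 in the 1-branch, receive $10,000
# in the 0-branch iff your policy pays in the 1-branch.
PAY_COST, REWARD = 100, 10_000
PRIOR = {"digit_1": 0.5, "digit_0": 0.5}   # before any observation

def payoff(world, pays_when_asked):
    """Payoff of the 'pay when asked' policy in each branch; in the 0-branch
    Omega pays out iff the policy would have paid in the 1-branch."""
    if world == "digit_1":
        return -PAY_COST if pays_when_asked else 0
    return REWARD if pays_when_asked else 0

def best(dist):
    """Pick the policy (pay or not) maximizing expectation under dist."""
    return max((True, False),
               key=lambda pays: sum(p * payoff(w, pays) for w, p in dist.items()))

# Updateless: evaluate policies against the prior, never conditioning on the digit.
print("updateless agent pays:", best(PRIOR))        # True

# Updating agent: condition on having seen the digit come up 1, then optimize.
POSTERIOR = {"digit_1": 1.0, "digit_0": 0.0}
print("updating agent pays:", best(POSTERIOR))      # False
```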
It’s precisely the assertion that all such problems have to be solved at the level of the probability distribution that I’m disputing. I’ll go so far as to make a testable prediction: it will eventually be acknowledged that the notion of a purely selfish agent is a good approximation that nonetheless cannot handle such extreme cases. If you can come up with a theory that handles them all without touching the utility function, I will be interested in seeing it!
None of the decision theories in question assume a purely selfish agent.
No, but most of the example problems do.