Omega could tell you “Either I am simulating you to gauge your response, or this is reality and I predicted your response”—and the problem would be essentially the same.
In that case, even a causal decision theorist (who cared about their copies) would get the right answer.
Presumably a CDT-agent reasons as follows: “There’s a 50% chance I’m the simulation, in which case my decision causally influences the content of the first box, and a 50% chance I’m real, in which case my decision causally influences whether I get one or both boxes.” There are two problems with this kind of reasoning (both of which have been pointed out before); a numerical sketch of the reasoning itself appears after the two problems below.
1. A CDT-agent needs a “finding copies of me in the world” module. It’s unclear how to design this, even in principle. (Why does a simulation count as a copy, but not a prediction? What about imprecise or cryptographically obscured simulations? Do they count?)
2. This kind of “anthropic reasoning” can be problematic due to a lack of independence between copies. See the Absent-Minded Driver problem for an example.
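To make that quoted CDT reasoning concrete, here is a minimal sketch, assuming the standard Newcomb payoffs ($1,000,000 in the first box iff the simulated agent one-boxes, $1,000 always in the second box) and the 50/50 credence about being the simulation. The payoff numbers and the `causal_eu` helper are illustrative assumptions, not anything specified in the dialogue.

```python
# Sketch of the CDT-agent's reasoning under assumed standard Newcomb payoffs:
# box 1 holds $1,000,000 iff the simulated agent one-boxes; box 2 always
# holds $1,000. All numbers and the 50/50 credence are illustrative.

BIG, SMALL = 1_000_000, 1_000   # assumed box payoffs
P_SIM = 0.5                     # credence: "I am the simulation"

def causal_eu(my_action, other_copy_action):
    """Causal expected payoff (to the real copy, whom the agent cares about)
    of intervening on my own action, holding the other copy's action fixed."""
    one = "one-box"
    # If I'm the simulation: my action causes box 1's content; the real copy
    # then collects whatever its own action gives it.
    box1_if_sim = BIG if my_action == one else 0
    payoff_if_sim = box1_if_sim + (SMALL if other_copy_action != one else 0)
    # If I'm real: box 1 was already fixed by the simulation's action;
    # my action only decides whether I also grab box 2.
    box1_if_real = BIG if other_copy_action == one else 0
    payoff_if_real = box1_if_real + (SMALL if my_action != one else 0)
    return P_SIM * payoff_if_sim + (1 - P_SIM) * payoff_if_real

for other in ("one-box", "two-box"):
    print(f"if the other copy plays {other}: "
          f"EU(one-box) = {causal_eu('one-box', other):,.0f}, "
          f"EU(two-box) = {causal_eu('two-box', other):,.0f}")
```

With these assumed numbers, one-boxing comes out ahead by 0.5 × ($1,000,000 − $1,000) regardless of what the agent believes its other copy does, which is why a CDT-agent that cares about its copies can get the “right answer” here.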
It seems to me that the case for UDT over CDT gets much stronger if you consider all of the problems that motivated it instead of just one.
Has UDT now solved that problem?
UDT tries to sidestep the problem. Instead of having a module that must decide, in a binary way, whether something is or isn’t a copy of itself, it uses its “math intuition module” to determine how much “logical correlation” exists between something and itself (i.e., it computes the conditional probabilities of various outcomes given its possible decisions). The idea is that once we understand how logical uncertainty is supposed to work, this will hopefully just work automatically, without any “extra code” for figuring out which things in the world count as copies.
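As a toy illustration of that structure (and nothing more), here is a sketch in which the binary copy-test is replaced by a hypothetical `math_intuition` stub that returns conditional probabilities of world-histories given the agent’s output. The stub’s probability table, the payoffs, and the function names are all made-up assumptions; an actual theory of logical uncertainty is precisely the missing piece.

```python
# Toy sketch of the structure described above (not UDT as formally specified):
# the agent asks a math-intuition stub for P(world-history | my decision
# procedure outputs a), then picks the output with the best expected utility.
# There is no separate step deciding what counts as a "copy" of the agent.

def math_intuition(world_history, my_output):
    """Stub: conditional probability of a world-history given my output.
    Hard-coded here; a real agent would get this from its solution to
    logical uncertainty."""
    table = {
        ("box1-full",  "one-box"): 0.99,
        ("box1-empty", "one-box"): 0.01,
        ("box1-full",  "two-box"): 0.01,
        ("box1-empty", "two-box"): 0.99,
    }
    return table[(world_history, my_output)]

def utility(world_history, my_output):
    box1 = 1_000_000 if world_history == "box1-full" else 0
    box2 = 1_000 if my_output == "two-box" else 0
    return box1 + box2

def choose(outputs=("one-box", "two-box"),
           histories=("box1-full", "box1-empty")):
    def expected_utility(output):
        return sum(math_intuition(h, output) * utility(h, output)
                   for h in histories)
    return max(outputs, key=expected_utility)

print(choose())  # -> "one-box" with the stub probabilities above
```

The work here is being done entirely by the probability table: how much “logical correlation” the agent sees between its output and the box’s content is exactly what the missing logical-uncertainty module would have to supply.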
Ok—but I don’t see that as being any better than CDT. In both cases we need a working module (that we don’t have) to make the theory work.
UDT is better than CDT because it allows correlations with “non-copies”; I think we should focus on that, not on CDT’s lack of copy-finding modules.
My argument is that every agent needs a solution to logical uncertainty anyway; otherwise it would be unable to, for example, decide whether or not to spend resources looking for a polynomial-time solution to 3-SAT (or could only decide such things in a haphazard way). So with CDT you would need an extra module on top of the logical-uncertainty one, and we don’t have that extra module either.
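For what it’s worth, the 3-SAT example can be made concrete with a back-of-the-envelope expected-value comparison. Every number below (the credence that a polynomial-time algorithm exists and could be found, the value of having one, the cost of the search) is invented purely to show that the decision cannot even be set up without some credence about a mathematical question.

```python
# Deciding whether to spend resources searching for a polynomial-time 3-SAT
# algorithm. The point is only that the expected values require a "logical"
# credence about a mathematical fact; all numbers are made up.

p_findable = 1e-6       # assumed credence that a poly-time 3-SAT algorithm exists and is findable
value_if_found = 1e12   # assumed value of having such an algorithm
search_cost = 1e5       # assumed cost of the research effort

eu_search = p_findable * value_if_found - search_cost
eu_skip = 0.0

print("search" if eu_search > eu_skip else "skip", eu_search)
```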