This is a good explanation, but I am wary of “I didn’t understand it completely until two days ago.” You might look back at that statement after your next insight and think it was kind of silly.
One thing I would like to see from a “complete understanding” is a way that a computationally bounded agent could implement an approximation of uncomputable UDT.
In counterfactual mugging problems, we, trying to use UDT, assign equal weights to heads-universe and tails-universe, because we don’t see any reason to expect one to have a higher Solomonoff prior than the other. So we are using our logical uncertainty about the Solomonoff prior, rather than the Solomonoff prior directly as ideal UDT would. Understanding how to handle and systematically reduce this logical uncertainty would be useful.
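To make that concrete, here is a minimal sketch in Python of how the 50/50 weighting feeds into the policy comparison. The payoff numbers ($100 to pay, $10,000 as Omega’s reward) are just the usual illustrative figures for counterfactual mugging, and the code is my own toy rendering, not anything from the post:

```python
# Toy sketch of UDT-style policy scoring for counterfactual mugging.
# The 50/50 weights stand in for our *logical* uncertainty about which
# universe the Solomonoff prior actually favors; ideal UDT would use the
# prior itself, which we cannot compute.

weights = {"heads": 0.5, "tails": 0.5}  # our logical guess, not the true prior

def payoff(universe, policy_pays):
    """Payoff of the policy 'pay when asked' vs. 'refuse' in each universe."""
    if universe == "heads":
        # Omega rewards you only if you are the kind of agent that pays on tails.
        return 10000 if policy_pays else 0
    else:
        # Tails: Omega actually asks you for the $100.
        return -100 if policy_pays else 0

def expected_utility(policy_pays):
    return sum(w * payoff(u, policy_pays) for u, w in weights.items())

print(expected_utility(True))   # 4950.0 for committing to pay
print(expected_utility(False))  # 0.0 for refusing
```

The interesting line is `weights`: that is where all of the logical uncertainty about the prior lives, and a principled way to refine it is exactly what I would want from a bounded approximation to UDT.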
> If you possess some useful information about the universe you’re in, it’s magically taken into account by the choice of “information set”, because logically, your decision cannot affect the universes that contain copies of you with different states of knowledge, so they only add a constant term to the utility maximization.
I object to “magically”, but this is otherwise correct.
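To spell out the “constant term” point in my own notation (this is just the usual UDT bookkeeping, not something taken from the post): write $P(U_i)$ for the weight of universe $U_i$, $o_i$ for what the copy of you in $U_i$ has observed, $o^*$ for your current state of knowledge, and $\pi_{o^* \mapsto a}$ for your policy with its output on $o^*$ set to $a$. Then

$$
\sum_i P(U_i)\,EU_i\!\left(\pi_{o^* \mapsto a}\right)
\;=\; \sum_{i:\,o_i = o^*} P(U_i)\,EU_i\!\left(\pi_{o^* \mapsto a}\right) \;+\; C ,
$$

where $C$ collects the universes whose copies of you see something other than $o^*$. The quoted claim is precisely that those terms cannot depend on $a$, so $C$ is constant and drops out of the $\arg\max$ over $a$; no magic is needed beyond choosing the information set correctly.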