I think it’s worse than that. If your argument is correct, the type of AI you are describing can’t plan because it can’t trust its future selves to follow through with the plan, even if doing so wouldn’t require commitment.
We can avoid this problem if Lucy performs an action only when it is the first step in a provably "good" sequence of actions. This would allow her to dodge the anvil when it interferes with her immediate plans, but not on the general grounds that "a universe with Lucy is a better universe, since Lucy is doing good things".
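To make the proposed rule concrete, here is a minimal sketch of it in Python. This is my own illustration, not anything from the original discussion: the plan representation, the bound on the search, and the `provably_good` predicate (standing in for an actual proof search in Lucy's formal system) are all assumptions for the sake of the example.

```python
from itertools import islice
from typing import Callable, Iterable, Optional, Sequence

Action = str
Plan = Sequence[Action]

def choose_action(
    candidate_plans: Iterable[Plan],
    provably_good: Callable[[Plan], bool],
    max_plans_to_check: int = 1000,
) -> Optional[Action]:
    """Return the first action of the first plan proved 'good', else None.

    The agent commits only to the first step; future steps are re-derived later.
    """
    for plan in islice(candidate_plans, max_plans_to_check):
        if plan and provably_good(plan):
            return plan[0]
    return None  # no provably good plan found within the bound: take no action

# Toy usage: any plan containing the anvil drop fails the "goodness" check.
plans = [("drop_anvil", "collect_reward"), ("dodge_anvil", "collect_reward")]
print(choose_action(plans, lambda p: "drop_anvil" not in p))  # -> "dodge_anvil"
```

The point of the rule is visible in the sketch: the anvil is dodged because it breaks a concrete plan Lucy can prove good, not because of a general self-trust argument about her future actions.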
That's not how UDT works, in my understanding, though I admit I'm not an expert on the subject. Do you have a reference?
I don’t have a reference that discusses UDT and the Loebian obstacle together. You can find a description of what is, AFAIK, the “latest and greatest” formulation of UDT here. UDT considers proofs in a formal system; if that system suffers from the Loebian obstacle, this will lead to the kind of problems I discuss here. In fact, I haven’t stated it explicitly, but I think of Lucy as a UDT agent: she considers possible actions as logical counterfactuals and computes expected utility based on that.
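For what "considers possible actions as logical counterfactuals and computes expected utility" might look like in toy form, here is a sketch. It is not the linked formulation of UDT: the weighted world-models and the `utility` function stand in for the proof search over a formal system that a real UDT agent would perform, and all names here are my own illustrative assumptions.

```python
from typing import Callable, Dict, List, Tuple

Action = str
World = Dict[str, float]  # toy world-model: a few named parameters

def udt_choose(
    actions: List[Action],
    worlds: List[Tuple[float, World]],          # (prior weight, world-model)
    utility: Callable[[Action, World], float],  # utility of world w under the
                                                # counterfactual "I output a"
) -> Action:
    """Pick the action whose logical counterfactual has highest expected utility."""
    def expected_utility(a: Action) -> float:
        return sum(p * utility(a, w) for p, w in worlds)
    return max(actions, key=expected_utility)

# Toy usage: dropping the anvil destroys the agent, so it scores badly everywhere.
worlds = [(0.7, {"reward": 10.0}), (0.3, {"reward": 2.0})]
def utility(action: Action, world: World) -> float:
    return -100.0 if action == "drop_anvil" else world["reward"]

print(udt_choose(["drop_anvil", "dodge_anvil"], worlds, utility))  # -> "dodge_anvil"
```

The Loebian problem enters where this sketch cheats: the `utility` evaluation here is just a function call, whereas Lucy would need her formal system to prove statements about the consequences of her own (and her future selves') outputs.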