I get the impression that your models of decision theory implicitly assume that agents have logical omniscience. Logical uncertainty contradicts that assumption, so any theory that tries to include both will end up confused, shuffling an inconsistency around.
I think to solve this you’re going to have to explicitly model an agent with bounded computing power.
This problem is even trickier: you have to explicitly model an agent with unlimited computing power that momentarily adopts the preferences of a bounded agent.
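As a rough illustration of what "explicitly modeling an agent with bounded computing power" might look like, here is a minimal Python sketch: the agent assigns a non-extreme credence to a logical claim (the primality of a number) whenever its computation budget runs out before the claim is settled, and then decides by expected utility under that logically uncertain belief. Everything here (the `BoundedAgent` class, the `step_budget` parameter, the fallback prior) is invented for the sketch, not a reference to any existing formalism.

```python
# Toy illustration only: a decision-maker whose beliefs about logical facts
# depend on how much computation it is allowed to spend. All names here
# (BoundedAgent, step_budget, the fallback prior) are made up for this sketch.

def prime_within_budget(n, budget):
    """Trial division limited to `budget` candidate divisors.
    Returns True/False if settled within the budget, else None."""
    d, steps = 2, 0
    while d * d <= n:
        if steps >= budget:
            return None           # out of compute: the claim stays unsettled
        if n % d == 0:
            return False          # found a factor, so n is composite
        d += 1
        steps += 1
    return True                   # exhaustive check succeeded: n is prime


class BoundedAgent:
    def __init__(self, step_budget, prior=0.5):
        self.step_budget = step_budget
        self.prior = prior        # credence assigned to claims it cannot settle

    def credence_prime(self, n):
        """Probability the agent assigns to 'n is prime'."""
        verdict = prime_within_budget(n, self.step_budget)
        if verdict is None:
            return self.prior     # bounded compute forces a non-0/1 credence
        return 1.0 if verdict else 0.0

    def choose_bet(self, n, payoff_prime, payoff_composite):
        """Pick whichever bet has higher expected utility under the agent's
        logically uncertain beliefs."""
        p = self.credence_prime(n)
        if p * payoff_prime >= (1 - p) * payoff_composite:
            return "bet prime"
        return "bet composite"


# The same architecture with different budgets yields different credences:
weak = BoundedAgent(step_budget=10)
strong = BoundedAgent(step_budget=10**6)
n = 104729  # the 10,000th prime
print(weak.credence_prime(n), strong.credence_prime(n))    # e.g. 0.5 vs 1.0
print(weak.choose_bet(n, payoff_prime=3, payoff_composite=2))
```

The point of the toy is only that the same agent architecture yields different credences, and hence different decisions, depending on the compute it is allowed, which is exactly the gap that an assumption of logical omniscience papers over.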