Is there a difference between utility indifference and false beliefs? The Von Neumann–Morgenstern utility theorem works entirely in terms of expected utility, so it does not distinguish a high probability of a modest payoff from a small probability of a high-magnitude payoff when the expected values are the same.
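To make that concrete with made-up numbers: a 99% chance of utility 1 and a 1% chance of utility 99 both have expected utility 0.99, so a pure expected-utility maximizer is indifferent between them.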
False beliefs might be contagious (spreading to other beliefs), and lead to logical problems with things like P(A) = P(B) = 1 while P(A and B) < 1 (or when impossible things happen).
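To spell out why that assignment is incoherent (a standard probability fact): since P(A or B) ≤ 1, the axioms give P(A and B) = P(A) + P(B) − P(A or B) ≥ 1 + 1 − 1 = 1, which contradicts P(A and B) < 1.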
If it believes something is impossible, then when it sees proof to the contrary it assumes there’s something wrong with its sensory input. If it has utility indifference, then when it sees that the universe is one it doesn’t care about, it acts on the tiny chance that there’s something wrong with its sensory input. I don’t see a difference. If you use Solomonoff induction and set a prior to zero, everything will work fine. Even a superintelligent AI won’t be able to use Solomonoff induction, and realistically Bayes’ theorem won’t describe its beliefs exactly, but that’s true regardless of whether it assigns zero probability to something.
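The first point follows directly from Bayes’ theorem: if P(H) = 0, then P(H | E) = P(E | H) P(H) / P(E) = 0 for any evidence E with P(E) > 0, so no observation can raise a hypothesis from probability zero, and the agent has to explain the observation some other way, such as sensor error.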
That’s not how utility indifference works. I’d recommend skimming the paper ( http://www.fhi.ox.ac.uk/utility-indifference.pdf ), and then asking me if you still have questions.