The abstractions I’m referring to are not meant to be limited to simpler agents that a human can predict. There are precise abstractions of you that can only be predicted by superintelligences, as well as imprecise ones that you and other humans can predict, things like theoretically ascribed reputation in specific situations. This almost touches on what’s needed for Newcomb’s Problem.
Obviously Omega can exist for uploads and arbitrarily complex AGIs formulated as abstract programs (these things can run on computers, so Omega could just use a similar computer). An embedded agent modeling the universe including itself is not a real additional difficulty in principle, even if we want to model the universe precisely (if the world is compressible, its shorter description can fit into a smaller agent embedded in the same world, and with quining we can avoid contradictions). Though almost certainly that kind of precise self-modeling isn’t possible in our world. But in any case, that’s not because self-reference causes trouble.
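To make the quining remark concrete, here is a minimal sketch (in Python, my choice of language for illustration): the standard quine construction, where a program reproduces its own complete source by applying a template to itself. The same trick is why an agent’s description of the world can contain an exact description of the agent without any circular definition.

```python
# A minimal quine: the string `s` is a template, and applying it to itself
# yields the program's full source. An embedded agent can carry an exact
# self-description the same way, with no contradiction from self-reference.
s = 's = %r\nprint(s %% s)'
print(s % s)
```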