…according to one of our most accepted theories these days (quantum mechanics), the inherent randomness of our universe…
And yet I’m able to perfectly predict that 2+2 is 4. The agents being predicted are abstractions, like the behavior of a computer program determined by its source code. For reasoning about an abstraction, it doesn’t matter that its instances of practical importance get to run within physics.
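To make that concrete, here is a minimal sketch in Python (the names are hypothetical and purely illustrative, not anyone's actual decision theory code): if the agent is a deterministic program, its choice is fixed by its source, so a predictor can obtain that choice by evaluating the same abstraction, regardless of which physical instance of the agent later runs.

```python
# Toy illustration: predicting an abstraction by evaluating it.
# (Hypothetical names; not a claim about how an actual Omega would work.)

def agent(observation: str) -> str:
    """A fully specified, deterministic decision procedure."""
    return "one-box" if observation == "two boxes on the table" else "wait"

def predict(program, observation: str) -> str:
    # "Omega"-style prediction here is just evaluating the abstract program.
    return program(observation)

# The prediction and the agent's actual choice necessarily agree.
assert predict(agent, "two boxes on the table") == agent("two boxes on the table")
print(predict(agent, "two boxes on the table"))  # one-box
```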
In most formulations I’ve seen, the word “you” is used, implying that the reader is the agent being predicted. I’ve never seen them limited to trivial agents that a human-level intelligence can predict.
It’s an open question whether an Omega can exist that can predict a truly complex intelligence, one which is itself modeling the universe, including itself.
The abstractions I’m referring to are not intended to be limited to simpler agents that a human can predict: there are precise abstractions of you that can only be predicted by superintelligences, as well as imprecise ones that can be predicted by you and other humans, things like theoretically ascribed reputation in specific situations. This almost touches on what’s needed for Newcomb’s Problem.
Obviously Omega can exist for uploads and for arbitrarily complex AGIs formulated as abstract programs (these things can run on computers, so Omega could just use a similar computer). An embedded agent modeling the universe, including itself, is no real additional difficulty in principle, even if we want to model the universe precisely: if the world is compressible, its shorter description can fit into a smaller agent embedded in that same world, and with quining we can avoid contradictions. Almost certainly that’s not possible to do in our world, but in any case, that’s not because self-reference causes trouble.
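As a toy illustration of the quining point (purely illustrative Python, not a proposal for how an embedded agent would actually be built): a program can contain and reconstruct its own complete source, so self-reference alone doesn’t produce a contradiction.

```python
# Toy sketch of quining: a program that contains a full description of itself
# and can still halt with a definite answer. Illustrative only.

template = (
    "template = {t!r}\n"
    "def my_source():\n"
    "    return template.format(t=template)\n"
    "def act():\n"
    "    # The agent consults its own complete source and still returns an answer.\n"
    "    return 'one-box' if 'one-box' in my_source() else 'two-box'\n"
    "print(act())\n"
)

# The fully expanded program; my_source() inside it reproduces this exact text.
program = template.format(t=template)
exec(program)  # prints: one-box
```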