Compare two views of “the universal prior”
AIXI: The external world is a Turing machine that receives our actions as input and produces our sensory impressions as output. Our prior belief about this Turing machine should be that it’s simple, i.e. the Solomonoff prior.
“The embedded prior”: The “entire” world is a sort of Turing machine, which we happen to be one component of in some sense. Our prior for this Turing machine should again be that it’s simple (the Solomonoff prior), but we have to condition on the observation that it’s complicated enough to contain observers (“Descartes’ update”). (This is essentially Naturalized induction.)
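The two views can be written out roughly as follows, under a standard formalization of the Solomonoff prior. This is a sketch: $\ell(q)$ denotes the length of program $q$, and the "contains an observer" condition of the embedded view is left informal here.

```latex
% AIXI-style view: a semimeasure over environment programs q that map
% the action sequence a_{1:t} to the observation sequence e_{1:t}.
M(e_{1:t} \mid a_{1:t}) \;=\; \sum_{q \,:\, q(a_{1:t}) = e_{1:t}} 2^{-\ell(q)}

% Embedded view: weight over whole-world programs w, conditioned on the
% world containing at least one observer ("Descartes' update").
P(w) \;\propto\; 2^{-\ell(w)} \cdot \mathbf{1}\!\left[\, w \text{ contains an observer} \,\right]
```

The structural difference is in what the program is asked to produce: in the first, the observer's input stream is the program's output; in the second, the observer is just one pattern inside the program's state.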
I think of the difference between these as “solipsism”—AIXI gives its own existence a distinguished role in reality.
Importantly, the laws of physics seem fairly complicated in an absolute sense—clearly they require tens[1] or hundreds of bits to specify. This is evidence against solipsism, because on the solipsistic prior, we expect to interact with a largely empty universe. But they don’t seem much more complicated than necessary for a universe that contains at least one observer, since the minimal source code for an observer is probably also fairly long.
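To make the description-length bookkeeping concrete, here is a toy sketch. The bit counts below (`K_PHYSICS`, `K_VIEWPOINT`) are made-up illustrative numbers, not estimates; the point is only that a hypothesis needing extra bits is penalized by a factor of two per bit under a Solomonoff-style prior.

```python
# Toy sketch (illustrative numbers only): comparing a "solipsistic" and an
# "embedded" hypothesis, where a hypothesis described by a K-bit program
# gets Solomonoff-style prior weight 2**-K.

def prior_weight(k_bits: float) -> float:
    """Prior weight of a hypothesis specified by a k_bits-long program."""
    return 2.0 ** -k_bits

# Assumed, hypothetical description lengths:
K_PHYSICS = 300    # bits to specify observer-independent laws of physics
K_VIEWPOINT = 100  # extra bits to single out one observer's input stream

# Solipsistic hypothesis: the world program must additionally encode which
# point of view is "mine", so it pays both costs.
solipsist = prior_weight(K_PHYSICS + K_VIEWPOINT)

# Embedded hypothesis: the program only specifies the laws; the observer is
# located inside the world rather than encoded in the program.
embedded = prior_weight(K_PHYSICS)

# The solipsistic hypothesis is penalized by a factor of 2**K_VIEWPOINT.
print(embedded / solipsist)  # 2.0 ** 100
```

The exact numbers don't matter; the asymmetry does. Any bits spent locating "me" in the description are bits the embedded hypothesis never has to pay.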
More evidence against solipsism:
The laws of physics don’t seem to privilege my frame of reference. This is a pretty astounding coincidence on the solipsistic viewpoint: it means we randomly picked a universe which simulates some observer-independent laws of physics and then picks out a specific point inside it, depending on some fairly complex parameters, to show me.
When I look out into the universe external to my mind, one of the things I find there is my brain, which really seems to contain a copy of my mind. This is another pretty startling coincidence on the solipsistic prior: the external universe being run happens to contain this kind of representation of the Cartesian observer.
[1] This is obviously a very small number, but I’m trying to be maximally conservative here.
Why wouldn’t they be the same? Are you saying AIXI doesn’t ask ‘where did I come from?’
Yes, that’s right. It’s the same basic issue that leads to the Anvil Problem.