It makes sense, but the conclusion apparently depends on how AIXI’s utility function is written. Assuming it knows Omega is trustworthy...
If AIXI’s utility function says to maximise revenue in this timeline, it does not pay.
If it says to maximise revenue across all its copies in the multiverse, it does pay.
The first case, if I have analysed it correctly, is kind of problematic for AIXI. It would want to self-modify...
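To make the two cases concrete, here is a minimal expected-value sketch. The payoff amounts (a 100-unit demand and a 10,000-unit reward) and the function names are my own illustrative assumptions, not figures taken from the scenario above.

```python
# Toy expected-value comparison for the counterfactual mugging.
# ASK and REWARD are assumed, illustrative amounts, not taken from the discussion above.
ASK = 100        # what the mugger demands after a tails-flip
REWARD = 10_000  # what Omega pays after a heads-flip, iff the agent would have paid on tails

def ev_this_timeline(pay: bool) -> float:
    """Utility counted only in the branch the agent actually observes.
    Having already seen tails, the heads branch contributes nothing."""
    return -ASK if pay else 0.0

def ev_across_copies(pay: bool) -> float:
    """Utility averaged over both branches: the copy that sees heads
    and the copy that sees tails (equivalently, the ex-ante view)."""
    heads = REWARD if pay else 0.0
    tails = -ASK if pay else 0.0
    return 0.5 * heads + 0.5 * tails

for pay in (True, False):
    print(pay, ev_this_timeline(pay), ev_across_copies(pay))
# "This timeline" prefers refusing (0 vs -100); "across copies" prefers paying (4950 vs 0).
```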
AIXI is incapable of understanding the concept of copies of itself. In fact, it’s incapable of finding itself in the universe at all. Daniel Dewey worked this out in detail, but the simple version is that AIXI is an uncomputable algorithm that models the whole universe as computable.
You’ve said that twice now, but where did Dewey do that?
I don’t think he’s published it yet; he did it in an internal FHI meeting. It’s basically an extension of the fact that an uncomputable algorithm looking only at computable models can’t find itself in them. Computable versions of AIXI (AIXItl, for example) have a similar problem: they cannot model themselves properly, as they would have to be exponentially larger than themselves to do so. Shortcuts need to be added to the algorithm to deal with this.
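As a very rough illustration of that blow-up (my own back-of-the-envelope framing, not Dewey’s argument or the actual AIXItl construction), consider an agent that enumerates every program of length at most l and runs each for t steps per cycle:

```python
# Back-of-the-envelope cost comparison: an AIXItl-style agent versus a hypothesis
# that would have to simulate that same agent. All numbers are purely illustrative.

def agent_work_per_cycle(l: int, t: int) -> int:
    """Roughly: enumerate ~2**l programs of length <= l and run each for t steps."""
    return (2 ** l) * t

l, t = 20, 1_000
per_hypothesis_budget = t                        # each hypothesis gets only ~t steps
cost_of_faithful_self_model = agent_work_per_cycle(l, t)

print(per_hypothesis_budget)                     # 1000
print(cost_of_faithful_self_model)               # 1048576000
print(cost_of_faithful_self_model // per_hypothesis_budget)  # 1048576, i.e. 2**l
# A hypothesis that faithfully simulated the whole agent would need ~2**l times the
# compute the agent grants any single hypothesis, hence the need for added shortcuts.
```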
Yes, more problems with my proposed fix. But is this even a problem in the first place? Can one uncomputable agent really predict the actions of another one? Besides, Omega can probably just take all the marbles and go home.
These esoteric problems apparently need rephrasing in more practical terms, but then they won’t be problems with AIXI any more.
If there is no multiverse and the coin flip is simply deterministic (perhaps based on the parity of the quadrillionth digit of pi), there is no version of AIXI that will benefit from paying the mugger, but it is still advantageous to precommit to doing so. AIXI, however, is designed to rule out possibilities once they contradict its observations, so it does not act correctly here.
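Here is a minimal sketch of that asymmetry, treating the agent’s logical uncertainty about the digit as an ordinary 50/50 prior; the payoffs and the two-hypothesis setup are illustrative assumptions, not part of the scenario as stated:

```python
# Before anything is revealed, the agent is (logically) uncertain about the digit's
# parity; once the mugging happens, the other parity contradicts its observations.
prior = {"digit_even": 0.5, "digit_odd": 0.5}

# Assumed payoffs of a "pay the mugger" policy in each world (illustrative numbers):
# the demand only happens in the odd world, the reward only in the even world.
payoff_if_pay = {"digit_even": 10_000, "digit_odd": -100}

def expected_payoff(weights: dict) -> float:
    total = sum(weights.values())
    return sum((w / total) * payoff_if_pay[h] for h, w in weights.items())

print(expected_payoff(prior))      # ex ante: 4950.0, so precommitting to pay looks good

# The mugger appears, which only happens in the odd world; an agent that rules out
# contradicted hypotheses drops the even world entirely and re-evaluates:
posterior = {h: w for h, w in prior.items() if h != "digit_even"}
print(expected_payoff(posterior))  # ex post: -100.0, so the updated agent refuses
```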
That seems to be a pretty counter-factual premise, though. There’s pretty good evidence for a multiverse, and you could hack AIXI to do the “right” thing—by giving it a “multiverse-aware” environment and utility function.
“No multiverse” wasn’t the best way to put it. Even in a multiverse, there is only one value of the quadrillionth digit of pi, so modifying AIXI to account for the multiverse does not provide a solution here, since we get the same result as in a single universe.
I don’t think multiverse theory works like that. In one universe it will be the 1001st digit, in another it will be the 1002nd digit. There is no multiverse theory where some agent is presented with a problem involving the quadrillionth digit of pi in all the universes.
Once AIXI is told that the coin flip will be over the quadrillionth digit of pi, all other scenarios contradict its observations, so they are ruled out and the utility conditional on them stops being taken into account.
Possibly. If that turns out to be a flaw, then AIXI may need more “adjustment” than just expanding its environment and utility function to include the multiverse.
I’m not sure what you mean. Are you saying that you still ascribe significant probability to AIXI paying the mugger?
Uncomputable AIXI being “out-thought” by uncomputable Omega now seems like a fairly hypothetical situation in the first place. I don’t pretend to know what would happen—or even if the question is really meaningful.
Priceless :-)