To make things easier to analyze, consider an AIXI variant where we replace the universal prior with a prior that assigns 0.5 probability to each of just two possible environments: one where Omega’s coin lands heads, and one where it lands tails. Once this AIXI variant is told that the coin landed tails, it updates its probability distribution and now assigns 1 to the second environment, and its expected utility computation now says that not paying maximizes EU.
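For concreteness, here is a toy version of that calculation (the $100 cost and $10,000 reward are just the usual counterfactual-mugging numbers, assumed here rather than taken from anything above):

```python
# Toy expected-utility calculation for the two-environment AIXI variant.
# Payoffs are assumed counterfactual-mugging numbers: Omega asks for $100
# on tails, and would have paid $10,000 on heads iff the agent is the
# sort of agent that pays on tails.

PRIOR = {"heads": 0.5, "tails": 0.5}

def payoff(env, pays_on_tails):
    if env == "heads":
        return 10_000 if pays_on_tails else 0
    return -100 if pays_on_tails else 0   # tails

def expected_utility(dist, pays_on_tails):
    return sum(p * payoff(env, pays_on_tails) for env, p in dist.items())

# Before observing the coin, the paying policy looks better:
print(expected_utility(PRIOR, True), expected_utility(PRIOR, False))          # 4950.0 0.0

# After being told "tails", this variant assigns probability 1 to the
# tails environment, and "not pay" now maximises expected utility:
POSTERIOR = {"heads": 0.0, "tails": 1.0}
print(expected_utility(POSTERIOR, True), expected_utility(POSTERIOR, False))  # -100.0 0.0
```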
Does that make sense?
It used to, as Tim notes, but I’m not so sure now. AIXI works with its distribution over programs and sequences of observations, not with states of a world and its properties. If presented with a sequence of observations generated by a program, it quickly figures out what the following observations are, but it’s more tricky here.
With other types of agents, we usually need to stipulate that the problem statement is somehow made clear to the agent. The way in which this could be achieved is not specified, and it seems very difficult to arrange by presenting an actual sequence of observations. So the shortcut is to draw the problem “directly” on the agent’s mind in terms of the agent’s ontology, and usually this is possible in a moderately natural way. This all takes place apart from the agent observing the state of the coin.
However, in the case of AIXI, it’s not as clear how the elements of the problem setting should be expressed in terms of its ontology. Basically, we have two worlds corresponding to the different coin states, which for simplicity can be assumed to be generated by two programs. The first idea is to identify the programs generating these worlds with the relevant AIXI hypotheses, so that observing “tails” excludes the “heads”-programs, and therefore the “heads”-world, from consideration.
But there are many possible “tails”-programs, and AIXI’s response depends on their distribution. For example, the choice of a particular “tails”-program could represent the state of other worlds. What does it say about this distribution that the problem statement was properly explained to the AIXI agent? It must necessarily be more than just observing “tails”, the same as for other types of agents (if you only toss a coin and it falls “tails”, that observation alone doesn’t prompt me to pay up). Perhaps “tails”-programs that properly model the counterfactual mugging also imply paying the mugger.
I don’t understand. Isn’t the biggest missing piece AIXI’s precise utility function, rather than its uncertainty?
It makes sense, but the conclusion apparently depends on how AIXI’s utility function is written. Assuming it knows Omega is trustworthy...
If AIXI’s utility function says to maximise revenue in this timeline, it does not pay.
If it says to maximise revenue across all its copies in the multiverse, it does pay.
The first case—if I have analysed it correctly—is kind of problematic for AIXI. It would want to self-modify...
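Roughly, the contrast between those two utility functions looks like this (a sketch only, reusing the assumed $100 / $10,000 counterfactual-mugging payoffs and flattening “the multiverse” into just a heads-branch and a tails-branch):

```python
# Sketch of the two candidate utility functions, after "tails" has been
# observed. The payoffs ($100 to pay, $10,000 from Omega on heads if the
# agent pays on tails) are assumptions, as is collapsing "the multiverse"
# into a single heads-branch plus a single tails-branch.

def branch_payoff(branch, pays_on_tails):
    if branch == "heads":
        return 10_000 if pays_on_tails else 0
    return -100 if pays_on_tails else 0

# Utility function 1: revenue in this timeline only (the tails branch).
def u_this_timeline(pays_on_tails):
    return branch_payoff("tails", pays_on_tails)

# Utility function 2: total revenue across copies in both branches.
def u_all_copies(pays_on_tails):
    return branch_payoff("heads", pays_on_tails) + branch_payoff("tails", pays_on_tails)

print(u_this_timeline(True), u_this_timeline(False))  # -100 0   -> does not pay
print(u_all_copies(True), u_all_copies(False))        # 9900 0   -> pays
```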
AIXI is incapable of understanding the concept of copies of itself. In fact, it’s incapable of finding itself in the universe at all. Daniel Dewey worked this out in detail, but the simple version is that AIXI is an uncomputable algorithm that models the whole universe as computable.
You’ve said that twice now, but where did Dewey do that?
I don’t think he’s published it yet; he did it in an internal FHI meeting. It’s basically an extension of the fact that an uncomputable algorithm that considers only computable models can’t find itself in them. Computable versions of AIXI (AIXItl, for example) have a similar problem: they cannot model themselves in a decent way, as they would have to be exponentially larger than themselves to do so. Shortcuts need to be added to the algorithm to deal with this.
Yes, more problems with my proposed fix. But is this even a problem in the first place? Can one uncomputable agent really predict the actions of another one? Besides, Omega can probably just take all the marbles and go home.
These esoteric problems apparently need rephrasing in more practical terms—but then they won’t be problems with AIXI any more.
If there is no multiverse and the coin flip is simply deterministic—perhaps based on the parity of the quadrillionth digit of pi—there is no version of AIXI that will benefit from paying the mugger, but it is still advantageous to precommit to doing so. AIXI, however, is designed to rule out possibilities once they contradict its observations, so it does not act correctly here.
That seems to be a pretty counter-factual premise, though. There’s pretty good evidence for a multiverse, and you could hack AIXI to do the “right” thing—by giving it a “multiverse-aware” environment and utility function.
“No multiverse” wasn’t the best way to put it. Even in a multiverse, there is only one value of the quadrillionth digit of pi, so modifying AIXI to account for the multiverse does not provide a solution here, since we get the same result as in a single universe.
I don’t think multiverse theory works like that. In one universe it will be the 1001st digit, in another it will be the 1002nd digit. There is no multiverse theory where some agent is presented with a problem involving the quadrillionth digit of pi in all the universes.
Once AIXI is told that the coin flip will be over the quadrillionth digit of pi, all other scenarios contradict its observations, so they are ruled out and the utility conditional on them stops being taken into account.
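Crudely, the “ruling out” step works like ordinary Bayesian elimination (a sketch, not AIXI proper):

```python
# Sketch of the elimination step: hypotheses whose predictions contradict
# the observation get zero weight, so any utility conditional on them
# drops out of the expected-utility sum.

hypotheses = {
    "digit_even_so_heads": {"prior": 0.5, "predicts": "heads"},
    "digit_odd_so_tails":  {"prior": 0.5, "predicts": "tails"},
}

def condition_on(observation, hyps):
    surviving = {name: h["prior"] for name, h in hyps.items()
                 if h["predicts"] == observation}
    total = sum(surviving.values())
    return {name: p / total for name, p in surviving.items()}

# Being told the flip is "tails" (the digit's parity) eliminates the other
# hypothesis entirely; nothing conditional on "heads" is counted any more.
print(condition_on("tails", hypotheses))   # {'digit_odd_so_tails': 1.0}
```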
Possibly. If that turns out to be a flaw, then AIXI may need more “adjustment” than just expanding its environment and utility function to include the multiverse.
I’m not sure what you mean. Are you saying that you still ascribe significant probability to AIXI paying the mugger?
Uncomputable AIXI being “out-thought” by uncomputable Omega now seems like a fairly hypothetical situation in the first place. I don’t pretend to know what would happen—or even if the question is really meaningful.
Priceless :-)