Think of “possible” here as meaning “possible to formalize”.
Why not say “possible to formalize”, if that is what is meant?
I DID say that, and you just quoted it! I didn’t say it in advance, and could not have been reasonably expected to, because it was obvious: AIXI is based on Solomonoff induction, which is not computable. No-one is claiming that it is, so stop putting words in my mouth. I cannot hope to predict all possible misinterpretations of what I say in advance (and even if I could, the resulting advance corrections would make the text too long for anyone to bother reading), but can only correct them as they are revealed.
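For readers who want the technical grounding of that claim, here is the Solomonoff prior as standardly presented (U is a universal monotone Turing machine, and \ell(p) is the length of program p in bits):

\[
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}
\]

The sum ranges over all programs p whose output on U begins with the string x. Evaluating M(x) exactly would require knowing which programs ever produce output extending x, which runs into the halting problem. Hence Solomonoff induction, and AIXI with it, is a definition rather than an algorithm.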
If AIXI were possible, then “artificial general optimisation” would make sense. Since it is not possible, using optimisation to clarify the meaning of “intelligence” leaves AGI=AGO as a contradictory concept, like a square circle.
No, that does not follow. Optimization does not have to be perfect to count as optimization. We’re not claiming that AGI will be as smart as AIXI, just smart enough to be dangerous to humans.
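To make the “imperfect optimization is still optimization” point concrete, here is a minimal toy sketch (the function names and objective are hypothetical illustrations, not anyone’s proposed AGI design): a greedy hill climber that provably gets stuck in local optima, yet still systematically pushes its objective upward, which is all that “optimizer” requires here.

import random

def hill_climb(objective, start, steps=1000):
    """Greedy local search: a deliberately imperfect optimizer.

    It gets stuck in local optima, so it is nowhere near perfect
    optimization, yet it still systematically improves the objective.
    """
    current = start
    for _ in range(steps):
        candidate = current + random.choice([-1, 1])
        if objective(candidate) > objective(current):
            current = candidate
    return current

def objective(x):
    # Two narrow peaks: a local optimum at x=0 (value 5) and the
    # global optimum at x=20 (value 50).
    return max(5 - 10 * abs(x), 50 - 10 * abs(x - 20))

# Starting at x=-3 the climber improves the objective from -25 to 5,
# but halts at the local peak: imperfect, yet still optimization.
print(hill_climb(objective, start=-3))

The climber never reaches the global peak at x=20, but it would be strange to deny that it is optimizing. The same goes for an AGI far below AIXI’s (unattainable) ceiling.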
Which brings in the other problem with AIXI. Humans can model themselves, which is something AIXI cannot do.
AIXI can model any computable agent. It can’t model itself except approximately (because it’s not computable), but again, humans are no better: We cannot model other humans, or even ourselves, perfectly. A related issue: AIXI is dualistic in the sense that its brain does not inhabit the same universe it can act upon. An agent implemented on real physics would have to be able to deal with this problem once it has access to its own brain, or it may simply wirehead itself. But this is an implementation detail, and not an obstacle to using AIXI as the definition of “the most intelligent unbiased agent possible”.
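For concreteness, this is AIXI’s action-selection rule in Hutter’s standard expectimax formulation (up to minor notational differences):

\[
a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \bigl(r_k + \cdots + r_m\bigr) \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
\]

Any computable environment or agent appears as some program q in the inner sum, which is why AIXI can model it. AIXI itself does not appear, since no finite program computes that sum, which is exactly the self-modeling limitation described above.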