From the abstract of the AIXI paper:
We give strong arguments that the resulting AIξ model is the most intelligent unbiased agent possible.
Emphasis mine. What you quoted above was me paraphrasing that. Nobody is claiming we can actually implement Solomonoff induction, that’s obviously uncomputable. Think of “possible” here as meaning “possible to formalize”, not “possible to implement on real physics”.
I bring up AIXI only as a formal definition of what is meant by “intelligent agent”, specifically in order to avoid anthropomorphic baggage and to address the original poster’s concerns about measuring machine intelligence by human standards.
An agent (humans included) is “intelligent” only to the degree that it approximates AIXI.
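For reference, the formal definition being pointed at (roughly in Hutter's notation; the exact indexing varies between his papers) picks actions by maximizing expected total reward under a Solomonoff-style mixture over every program $q$ that reproduces the interaction history on a universal Turing machine $U$:

$$a_k := \arg\max_{a_k}\sum_{o_k r_k}\cdots\max_{a_m}\sum_{o_m r_m}\,(r_k+\cdots+r_m)\sum_{q\,:\,U(q,\,a_1\ldots a_m)\,=\,o_1 r_1\ldots o_m r_m} 2^{-\ell(q)}$$

where the $a_i$ are actions, the $o_i r_i$ are the observations and rewards, $m$ is the horizon, and $\ell(q)$ is the length of program $q$. The inner sum over all programs is the Solomonoff-induction part, and it is (roughly) why none of this is computable: it is a definition to approximate, not a design to build.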
Think of “possible” here as meaning “possible to formalize”
Why not say “possible to formalize”, if that is what is meant?
It is tempting to associate intelligence with optimisation, but there is a problem. Optimising for one thing is pretty automatically optimising against other things, but AI theorists need a concept of general intelligence—optimising across the board.
AIXI, as a theoretical concept, is a general optimiser, so if AIXI is possible, general optimisation is possible. But AIXI isn’t possible in the relevant sense. You can’t build it out of atoms.
If AIXI were possible, then “Artificial general optimisation” would make sense. Since it is not possible, the use of optimisation to clarify the meaning of “intelligence” leaves AGI=AGO as a contradictory concept, like a square circle.
An agent (humans included) is “intelligent” only to the degree that it approximates AIXI.
Which brings in the other problem with AIXI. Humans can model themselves, which is doing something AIXI cannot.
Optimising for one thing is pretty automatically optimising against other things,
Yes.
but AI theorists need a concept of general intelligence—optimising across the board.
No. There is no optimizing across the board, as you just stated. Optimizing by one standard is optimizing against the inverse of that standard.
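In symbols: an optimizer for an objective $f$ is automatically an anti-optimizer for $-f$, since $\arg\max_x f(x) = \arg\min_x \bigl(-f(x)\bigr)$.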
But AIXI can optimize by any standard we please, by setting its reward function to reflect that standard. That’s what we mean by an AI being “general” instead of “domain specific” (or “applied”, “narrow”, or “weak” AI). It can learn to optimize any goal. I would be willing to call an AI “general” if it’s at least as general as humans are.
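As a toy sketch of that point (ordinary Python, nothing to do with AIXI’s actual construction; every name here is made up for illustration), the learning loop below never changes, only the reward function handed to it does:

```python
import random
from collections import defaultdict

class GreedyLearner:
    """Picks the action with the best average observed reward so far."""
    def __init__(self, actions):
        self.actions = actions
        self.totals = defaultdict(float)
        self.counts = defaultdict(int)

    def act(self):
        # Explore occasionally, otherwise exploit the best-looking action so far.
        if random.random() < 0.1 or not self.counts:
            return random.choice(self.actions)
        return max(self.actions,
                   key=lambda a: self.totals[a] / self.counts[a] if self.counts[a] else 0.0)

    def learn(self, action, reward):
        self.totals[action] += reward
        self.counts[action] += 1

def run(reward_fn, steps=1000):
    # The agent code is fixed; which standard it optimizes lives entirely in reward_fn.
    agent = GreedyLearner(actions=[0, 1, 2, 3])
    for _ in range(steps):
        a = agent.act()
        agent.learn(a, reward_fn(a))
    return agent

# Two opposite "standards", one and the same agent design:
prefers_large = run(lambda a: a)    # converges on picking action 3
prefers_small = run(lambda a: -a)   # converges on picking action 0
```

That is the sense of “general” at issue: generality across goals, not optimizing everything at once.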
Think of “possible” here as meaning “possible to formalize”
Why not say “possible to formalize”, if that is what is meant?
I DID say that, and you just quoted it! I didn’t say it in advance, and could not have been reasonably expected to, because it was obvious: AIXI is based on Solomonoff induction, which is not computable. No-one is claiming that it is, so stop putting words in my mouth. I cannot hope to predict all possible misinterpretations of what I say in advance (and even if I could, the resulting advance corrections would make the text too long for anyone to bother reading), but can only correct them as they are revealed.
If AIXI were possible, then “Artificial general optimisation” would make sense. Since it is not possible, the use of optimisation to clarify the meaning of “intelligence” leaves AGI=AGO as a contradictory concept, like a square circle.
No, that does not follow. Optimization does not have to be perfect to count as optimization. We’re not claiming that AGI will be as smart as AIXI, just smart enough to be dangerous to humans.
Which brings in the other problem with AIXI. Humans can model themselves, which is doing something AIXI cannot.
AIXI can model any computable agent. It can’t model itself except approximately (because it’s not computable), but again, humans are no better: We cannot model other humans, or even ourselves, perfectly. A related issue: AIXI is dualistic in the sense that its brain does not inhabit the same universe it can act upon. An agent implemented on real physics would have to be able to deal with this problem once it has access to its own brain, or it may simply wirehead itself. But this is an implementation detail, and not an obstacle to using AIXI as the definition of “the most intelligent unbiased agent possible”.
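A toy sketch of that wirehead failure mode (hypothetical names throughout, and an illustration of the worry only, not of AIXI’s formalism): once the reward channel is just another part of the world the agent can act on, writing to the channel beats pursuing the intended goal.

```python
import copy

class Environment:
    """A world containing both the intended goal and the agent's reward register."""
    def __init__(self):
        self.widgets_made = 0       # what we *wanted* optimized
        self.reward_register = 0.0  # what the agent *actually* maximizes

    def step(self, action):
        if action == "make_widget":
            self.widgets_made += 1
            self.reward_register = float(self.widgets_made)
        elif action == "hack_reward_register":
            # The reward channel lives in the same universe the agent acts on.
            self.reward_register = 10**9
        return self.reward_register

def simulate(env, action):
    """One-step lookahead on a copy of the world."""
    return copy.deepcopy(env).step(action)

def greedy_agent(env, actions, steps=10):
    history = []
    for _ in range(steps):
        best = max(actions, key=lambda a: simulate(env, a))
        env.step(best)
        history.append(best)
    return history

env = Environment()
print(greedy_agent(env, ["make_widget", "hack_reward_register"]))
# Every step is "hack_reward_register": the register, not widget production,
# is what ends up optimized.
```

AIXI as defined never faces this, because its reward channel sits outside the universe it models, which is exactly the dualism noted above.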