I have a question. What does it mean for AIXI to be the optimal time bounded AI? If it’s so great, why do people still bother with ANNs and SVMs and SOMs and KNNs and TLAs and T&As? My understanding of it is rather cloudy (as is my understanding of all but the last two of the above), so I’d appreciate clarification.
First of all, AIXI isn’t actually “the optimal time bounded AI”. What AIXI is “optimal” for is coming to correct conclusions from the smallest amount of data, and “optimal” there means Pareto-optimal: no other program does better than AIXI in at least one possible world without also doing worse in another.
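For reference, here is the rough shape of the definition (reproduced from memory of Hutter’s papers, so check the original for exact notation): AIXI picks its next action a_k by an expectimax over all possible futures up to a horizon m, weighting each environment program q that is consistent with the history by its length ℓ(q) on a universal machine U:

```latex
a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
\bigl( r_k + \cdots + r_m \bigr)
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Here the a’s are actions, the o’s and r’s are observations and rewards, and the inner sum over programs q is the Solomonoff-style prior that makes the whole thing uncomputable in practice.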
Furthermore, AIXI itself uses Solomonoff induction directly, and Solomonoff induction is uncomputable. (It can be approximated, though.)
AIXItl is the time-limited version of AIXI, but it amounts to “test all the programs that you can, find the best one, and use that”, and it’s only “optimal” when compared against the programs that it can test, so it’s not actually practical to use, either.
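To make the “test all the programs you can, pick the best” shape concrete, here is a toy sketch under made-up assumptions: instead of real programs up to some length l, the candidates are just all lookup tables over the last two bits of history, each is scored on observed data under a hard step budget (the stand-in for the time limit t), and the best scorer wins. Nothing here is from Hutter’s paper beyond that brute-force shape.

```python
import itertools

HISTORY = [0, 1, 0, 1, 0, 1, 0, 1]  # data the agent has seen so far

def all_programs():
    # Every function {00,01,10,11} -> {0,1}, encoded as a 4-bit table.
    # This tiny space stands in for "all programs up to length l".
    for table in itertools.product([0, 1], repeat=4):
        yield table

def run(table, a, b):
    # Predict the next bit from the last two bits of history.
    return table[2 * a + b]

def score(table, history, budget=1000):
    # Count correct predictions, cutting off at a step budget
    # (a crude stand-in for AIXItl's per-program time limit t).
    correct, steps = 0, 0
    for i in range(2, len(history)):
        steps += 1
        if steps > budget:
            break
        if run(table, history[i - 2], history[i - 1]) == history[i]:
            correct += 1
    return correct

# Enumerate, score, keep the best: the AIXItl move in miniature.
best = max(all_programs(), key=lambda t: score(t, HISTORY))
print(best, run(best, 1, 0))  # the winner has learned the alternation
```

The “only optimal relative to the programs it can test” caveat is visible here: if the true environment needed more than two bits of context, no table in this space could represent it, and the search would happily return the best of a bad lot.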
(At least, that’s what I could gather from reading the PDF of the paper on AIXI. Could someone who knows what they’re talking about correct any mistakes?)
There is an enormous difference between the formal mathematical definition of “computable” and “able to be run by a computer that could be constructed in this universe”. AIXI is computable in the mathematical sense of being written as a computer program that will provably halt in finitely many steps, but it is not computable in the sense that running it is possible, even using all the resources in the observable universe optimally, because its runtime requirements are astronomically larger than anything the universe can supply.
‘Astronomically’? That’s the first time I’ve seen that superlative inadequate for the job.
AIXI’s decision procedure is not computable (but AIXItl’s is). (Link)
Yes, I think the term in computational complexity theory is tractability, which is the practical subset of computability.
AIXI is interesting just from a philosophical perspective, but even in the practical sense it has utility in showing what the ultimate limit is, and starting there you can find approximations and optimizations that move you into the land of the tractable.
For an example analogy, in computer graphics we have full blown particle ray tracing as the most accurate theory at the limits, and starting with that and then speeding it up with approximations that minimize the loss of accuracy is a good strategy.
The Monte Carlo approximation to AIXI is tractable and it can play small games (fairly well?).
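A minimal sketch of the Monte Carlo idea, with the caveat that this is not the actual MC-AIXI-CTW agent (which also learns a model of its environment): in a known toy game, estimate each first action’s value by averaging the reward of many random playouts, then act greedily on the estimates. The game here is invented for illustration: walk on the integers for 5 steps starting at 0, actions are +1 or −1, reward is the final position.

```python
import random

random.seed(0)  # seeded so the sketch is reproducible

STEPS, ROLLOUTS = 5, 2000
ACTIONS = (+1, -1)

def rollout(first_action):
    # Play the first action, then finish the game with random moves.
    pos = first_action
    for _ in range(STEPS - 1):
        pos += random.choice(ACTIONS)
    return pos  # reward = final position

def mc_value(action):
    # Monte Carlo value estimate: average reward over many rollouts.
    return sum(rollout(action) for _ in range(ROLLOUTS)) / ROLLOUTS

best = max(ACTIONS, key=mc_value)
print(best)  # the rollouts favor stepping right first
```

The expectimax over all futures gets replaced by sampling, which is the whole trick: accuracy degrades gracefully with fewer rollouts instead of the computation being all-or-nothing.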
For a more practical AGI design on a limited budget, it’s probably best to use hierarchical approximate simulation, more along the lines of what the mammalian cortex appears to do.
Are you familiar with Big O? See also ‘constant factor’.
(You may not be interested in the topic, but an understanding of Big O and constant factors is one of the taken-for-granted pieces of knowledge here.)
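To spell out the constant-factor point with made-up numbers: asymptotic class alone doesn’t tell you whether something can run in this universe. Both step counts below are O(n), but one carries a constant factor far beyond the number of atoms in the observable universe (roughly 10^80), so no physical computer ever finishes it.

```python
def cheap_steps(n):
    # O(n) with constant factor 3: fine in practice.
    return 3 * n

def ruinous_steps(n):
    # Also O(n), but the constant alone exceeds the universe's resources.
    return 10**90 * n

ATOMS_IN_UNIVERSE = 10**80  # rough standard estimate

print(cheap_steps(1000))                      # 3000 steps: trivial
print(ruinous_steps(1) > ATOMS_IN_UNIVERSE)   # True
```

AIXI’s situation is the mirror image: it is the growth rate itself, not just the constant, that is hopeless, but either one is enough to separate “computable” from “runnable”.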
You mean that horrible Batman knockoff? I hated it.
Yeah, I know what Big O is.