That’s just another description of the results that intelligence obtains. By contrast, this explains why you can get iron from the rocks you can get it from.
Well, you can design explicit approximations of Solomonoff induction, like AIXI; they’re just intractable.
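To make “intractable” concrete, here is a minimal toy sketch of what brute-force Solomonoff-style prediction looks like. This is my own illustration, not AIXI itself: a cyclic-pattern interpreter stands in for the universal machine the real definition quantifies over.

```python
from itertools import product

def run(program, n):
    """Toy 'interpreter': a program is a bit tuple whose output is the
    pattern repeated cyclically. A deliberately crippled stand-in for
    the universal machine Solomonoff induction actually requires."""
    return [program[i % len(program)] for i in range(n)]

def predict_next(observed, max_len=12):
    """Brute-force Solomonoff-style prediction: enumerate every program
    of up to max_len bits, keep the ones whose output is consistent
    with the observed sequence, and weight each prediction by
    2**-length. The enumeration visits ~2**max_len programs."""
    weights = {0: 0.0, 1: 0.0}
    for length in range(1, max_len + 1):
        for program in product([0, 1], repeat=length):
            out = run(program, len(observed) + 1)
            if out[: len(observed)] == list(observed):
                weights[out[len(observed)]] += 2.0 ** -length
    total = weights[0] + weights[1]
    return {bit: w / total for bit, w in weights.items()}

print(predict_next([0, 1, 0, 1, 0, 1]))  # overwhelmingly favours 0 next
```

Even with this tiny program class the enumeration cost doubles with every extra bit of program length; with a genuine universal machine the halting problem makes the scheme uncomputable outright, and AIXI’s computable variants remain hopelessly expensive.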
I am not persuaded that AIXI is a step towards AGI. When I look at the field of AGI it is as if I am looking at complexity theory at a stage when the concept of NP-completeness had not been worked out.
Imagine an alternate history of complexity theory, in which at some stage we knew of a handful of problems that seemed really hard to solve efficiently, but an efficient solution to one would solve them all. If someone then discovered a new problem that turned out to be equivalent to these known ones, it might be greeted as offering a new approach to finding an efficient solution—solve this problem, and all of those others will be solved.
But we know that wouldn’t have worked. When a new problem is proved NP-complete, that doesn’t give us a new way to find efficient solutions to NP-complete problems. It just gives us a new example of a hard problem.
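To make that concrete, here is the textbook reduction from 3-SAT to Independent Set, sketched in toy Python (the function names and literal encoding are illustrative):

```python
from itertools import combinations

def sat_to_independent_set(clauses):
    """Textbook reduction from 3-SAT to Independent Set: one vertex per
    literal occurrence, an edge between literals in the same clause and
    between complementary literals. The formula is satisfiable iff the
    graph has an independent set of size len(clauses)."""
    vertices = [(i, lit) for i, clause in enumerate(clauses) for lit in clause]
    edges = [(u, v) for u, v in combinations(vertices, 2)
             if u[0] == v[0] or u[1] == -v[1]]
    return vertices, edges, len(clauses)

def has_independent_set(vertices, edges, k):
    """Still brute force, still exponential: the reduction handed us a
    new hard problem, not a new way to solve one."""
    for subset in map(set, combinations(vertices, k)):
        if not any(u in subset and v in subset for u, v in edges):
            return True
    return False

# (x1 or x2 or x3) and (not x1 or not x2 or x3), literals as signed ints
clauses = [(1, 2, 3), (-1, -2, 3)]
print(has_independent_set(*sat_to_independent_set(clauses)))  # True: satisfiable
```

The reduction itself runs in polynomial time; the search it leaves you with does not. Proving the new problem equivalent transfers the hardness, not a solution.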
Look at all the approaches to AGI that have been proposed. Logic was mechanised, and people said, “Now we can make an intelligent machine.” That didn’t pan out. “Good enough heuristics will be intelligent!” “A huge pile of ‘common sense’ knowledge and a logic engine will be intelligent!” “Really good compression would be equivalent to AGI!” “Solomonoff induction is equivalent to AGI!”
So nowadays, when someone says “solve this new problem and it will be an AGI!” I take that to be a proof that the new problem is just as hard as the old ones, and that no new understanding has been gained about how to make an AGI.
The analogy with complexity theory breaks down in one important way. There are reasons to think that P != NP is not merely a mathematical fact but a physical law (I don’t have an exact reference, but Scott Aaronson has said something to this effect somewhere), whereas we already have an existence proof for human-level intelligence: us. So we know there is a solution; we just haven’t found it.