“Intelligence” is such a wonderfully easy explanation, and it feels good to say it; it gives you a sacred mystery to worship.
The word “intelligence”, properly used, is not an explanation. It is a name for a phenomenon that at present we have no explanation for. And because we have no explanation for it, we can define it only by describing what it looks like. None of the various descriptions that people have come up with are explanations, and we cannot even have confidence that any of them draws a line that corresponds to a real division in the world.
As an analogy, how could someone answer the question “what is iron ore?” in pre-modern times? Only by saying that it’s a sort of rock looking thus-and-so from which iron can be obtained by a certain process. Saying “this is iron ore” is not an explanation of the fact that you can get iron from it, it is simply a statement of that fact.
Saying, “this hypothetical creature is an artificial superintelligence” is merely to say that one has imagined something that does intelligence supremely better than we do. To say that it would therefore have an advantage in taking over ancient Rome is to say that the skill of intelligence would be useful for that purpose, and the more the better, whereas the ability to lift heavy things, leap tall buildings, or see in the dark would be of at most minor relevance to the task.
Approximation of Solomonoff induction?
That’s just another description of the results that intelligence obtains. By contrast, the chemistry of smelting explains why you can get iron from the rocks you can get it from.
Well, you can write down explicit agents built on Solomonoff induction, like AIXI; they’re just incomputable, and their computable approximations are intractable.
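For reference, here is a sketch of those definitions, roughly in Hutter’s notation (nothing below is original, and I may be glossing over details of the universal monotone machine U). The Solomonoff prior assigns a string x the weight

  M(x) := \sum_{p \,:\, U(p) = x\ast} 2^{-\ell(p)},

where the sum runs over all programs p whose output begins with x and \ell(p) is the length of p. AIXI then picks actions by expectimax over that prior:

  a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \left( r_k + \cdots + r_m \right) \sum_{q \,:\, U(q, a_{1:m}) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}.

Both sums range over every program for U, so evaluating them exactly means knowing what every program outputs, which runs into the halting problem. That is why AIXI is incomputable rather than merely slow, and why computable cut-offs such as AIXItl remain intractable.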
I am not persuaded that AIXI is a step towards AGI. When I look at the field of AGI, it is as if I am looking at complexity theory at a stage when the concept of NP-completeness had not yet been worked out.
Imagine an alternate history of complexity theory, in which at some stage we knew of a handful of problems that seemed really hard to solve efficiently, but an efficient solution to one would solve them all. If someone then discovered a new problem that turned out to be equivalent to these known ones, it might be greeted as offering a new approach to finding an efficient solution—solve this problem, and all of those others will be solved.
But we know that wouldn’t have worked. When a new problem is proved NP-complete, that doesn’t give us a new way to find efficient solutions to NP-complete problems. It just gives us a new example of a hard problem.
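To spell out the direction of the reductions (standard definitions, nothing specific to this discussion):

  A \le_p B \;\text{and}\; B \in \mathrm{P} \;\Longrightarrow\; A \in \mathrm{P},
  B \text{ is NP-complete} \;\iff\; B \in \mathrm{NP} \;\text{and}\; \forall A \in \mathrm{NP}:\ A \le_p B.

A reduction only transfers easiness from B to the problems that reduce to it. An NP-completeness proof shows that everything in NP reduces to B, which tells you B is at least as hard as all of them and hands you no algorithm for B itself.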
Look at all the approaches to AGI that have been proposed. Logic was mechanised, and people said, “Now we can make an intelligent machine.” That didn’t pan out. “Good enough heuristics will be intelligent!” “A huge pile of ‘common sense’ knowledge and a logic engine will be intelligent!” “Really good compression would be equivalent to AGI!” “Solomonoff induction is equivalent to AGI!”
So nowadays, when someone says “solve this new problem and it will be an AGI!” I take that to be a proof that the new problem is just as hard as the old ones, and that no new understanding has been gained about how to make an AGI.
The analogy with complexity theory breaks down in one important way: there are reasons to think that P != NP is not merely a mathematical conjecture but something like a physical law (I don’t have an exact reference, but Scott Aaronson has said as much somewhere), whereas for human-level intelligence we already have an existence proof: us. So we know a solution exists; we just haven’t found it.