There is no such thing as general intelligence, i.e. an algorithm that is “capable of behaving intelligently over many domains” without being specifically designed for those domains.
Sure there is—see:
Legg, S., Hutter, M.: Tests of Machine Intelligence. In: Proc. 50th Anniversary Summit of Artificial Intelligence, Monte Verità, Switzerland (2007)
Hutter, M.: Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability. Springer, Berlin (2004)
Hernández-Orallo, J., Dowe, D.: Measuring universal intelligence: Towards an anytime intelligence test. Artificial Intelligence 174, 1508-1539 (2010)
Solomonoff, R. J.: A Formal Theory of Inductive Inference: Parts 1 and 2. Information and Control 7, 1-22 and 224-254 (1964).
The only assumption about the environment is that Occam’s razor applies to it.
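To make that concrete, here is a minimal statement of the Legg-Hutter universal intelligence measure from the references above: an agent's intelligence is its performance summed over all computable environments, with each environment weighted by its simplicity, which is exactly where Occam's razor enters.

```latex
% Legg-Hutter universal intelligence of an agent \pi: performance summed
% over all computable environments, weighted by simplicity (Occam's razor).
\[
  \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
\]
% E:             the set of all computable reward environments.
% V^{\pi}_{\mu}: expected total reward of agent \pi in environment \mu.
% K(\mu):        Kolmogorov complexity of \mu, so simpler environments weigh more.
```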
Of course you’re right in the strictest sense! I should have included something along the lines of “an algorithm that can be efficiently computed”; this was already discussed in other comments.
IMO, it is best to think of power and breadth as two orthogonal dimensions, like this:
narrow <-> broad;
weak <-> powerful.
The idea that general intelligence is not practical for resource-limited agents apparently mixes up these two dimensions, which are best seen as orthogonal. Or maybe the underlying idea is that if you are broad, you can’t also be deep and still be computed quickly. I don’t think that idea is correct.
I would compare the idea to saying that we can’t build a general-purpose compressor. But yes, we can.
I don’t think the idea that “there is no such thing as general intelligence” can be rescued by invoking resource limitation. It is best to abandon the idea completely and label it as a dud.
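To make the compressor point concrete, here is a minimal sketch (Python, using the standard zlib codec; the sample inputs are made up for illustration) of one general-purpose compressor handling inputs from several unrelated domains without being designed for any of them:

```python
import zlib

# One general-purpose codec, applied to inputs from unrelated domains.
# None of these domains were anticipated by the compressor's designers.
samples = {
    "english": b"the quick brown fox jumps over the lazy dog " * 20,
    "dna":     b"ACGTACGTTAGCACGTACGTTAGC" * 40,
    "logs":    b"GET /index.html 200\nGET /style.css 200\n" * 30,
}

for domain, data in samples.items():
    packed = zlib.compress(data, level=9)
    assert zlib.decompress(packed) == data  # lossless round trip
    print(f"{domain}: {len(data)} -> {len(packed)} bytes")
```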
That is a very good point: breadth is orthogonal to power.
Evolution is broad but weak. Humans (and presumably AGI) are broad and powerful. Expert systems are narrow and powerful. Anything weak and narrow can barely be called intelligent.
I don’t care about that specific formulation of the idea; maybe Robin Hanson’s formulation that there exists no “grand unified theory of intelligence” is clearer? (link)
Clear, but also clearly wrong. Robin Hanson says:

After all, we seem to have little reason to expect there is a useful grand unified theory of betterness to discover, beyond what we already know. “Betterness” seems mostly a concept about us and what we want – why should it correspond to something out there about which we can make powerful discoveries?
...but the answer seems simple. A big part of “betterness” is the ability to perform inductive inference, which is not a human-specific concept. We already have a powerful theory of that, discovered within the last 50 years. It doesn’t immediately suggest an implementation strategy, which is what we need, so more discoveries relating to it seem likely.
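The theory in question is Solomonoff’s, cited above: sequence prediction via a universal prior over programs. A minimal statement:

```latex
% Solomonoff's universal prior: the probability of a finite sequence x is
% the total weight of all programs p that make a universal machine U
% output something beginning with x, with shorter programs weighing more.
\[
  M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}
\]
% Prediction follows by conditioning:
% \[ M(x_{n+1} \mid x_1 \dots x_n) = M(x_1 \dots x_n x_{n+1}) / M(x_1 \dots x_n) \]
% M is only lower-semicomputable, which is exactly why the theory does not
% immediately yield an implementation strategy.
```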
Clearly, I do not understand how this data point should influence my estimate of the probability that general, computationally tractable methods exist.
To me it seems a lot like the question of whether general, computationally tractable methods of compression exist.
Provided you are allowed to assume that the expected inputs obey some vaguely sensible version of Occam’s razor, I would say that the answer is just “yes, they do”.
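That dependence on Occam’s razor is easy to exhibit: a general-purpose compressor wins on low-complexity inputs and necessarily loses on incompressible ones. A small sketch (Python, zlib again; the inputs are illustrative):

```python
import os
import zlib

n = 100_000
structured = b"abcd" * (n // 4)   # low complexity: a short program prints it
random_bits = os.urandom(n)       # incompressible with overwhelming probability

for name, data in [("structured", structured), ("random", random_bits)]:
    packed = zlib.compress(data, level=9)
    print(f"{name}: {len(data)} -> {len(packed)} bytes")

# Expected outcome: the structured input shrinks dramatically; the random one
# grows slightly. A counting argument shows no compressor can shrink most
# inputs, so general-purpose compression "works" only because the inputs we
# actually encounter are Occam-biased.
```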