General Intelligence or Universal Intelligence is the ability to efficiently achieve goals in a wide range of domains.
This tag is specifically for discussing intelligence in the broad sense: for discussion of IQ testing and psychometric intelligence, see IQ / g-factor; for discussion of, e.g., specific results in artificial intelligence, see AI. Those tags may overlap with this one to the extent that they discuss the nature of general intelligence.
Examples of posts that fall under this tag include The Power of Intelligence, Measuring Optimization Power, Adaptation-Executers not Fitness Maximizers, Distinctions in Types of Thought, and The Octopus, the Dolphin and Us: a Great Filter tale.
On the difference between psychometric intelligence (IQ) and general intelligence:
But the word “intelligence” commonly evokes pictures of the starving professor with an IQ of 160 and the billionaire CEO with an IQ of merely 120. Indeed there are differences of individual ability apart from “book smarts” which contribute to relative success in the human world: enthusiasm, social skills, education, musical talent, rationality. Note that each factor I listed is cognitive. Social skills reside in the brain, not the liver. And jokes aside, you will not find many CEOs, nor yet professors of academia, who are chimpanzees. You will not find many acclaimed rationalists, nor artists, nor poets, nor leaders, nor engineers, nor skilled networkers, nor martial artists, nor musical composers who are mice. Intelligence is the foundation of human power, the strength that fuels our other arts.
-- Eliezer Yudkowsky, Artificial Intelligence as a Positive and Negative Factor in Global Risk
Definitions of General Intelligence
After reviewing extensive literature on the subject, Legg and Hutter[1] summarize the many existing definitions in the informal statement "Intelligence measures an agent's ability to achieve goals in a wide range of environments." They then show that this definition can be mathematically formalized given reasonable definitions of its terms. Using Solomonoff induction (a formalization of Occam's razor), they weight an agent's expected performance in each computable environment by that environment's complexity, so that simpler environments contribute more to the overall measure. They argue this final formalization is a valid, meaningful, informative, general, unbiased, fundamental, objective, universal and practical definition of intelligence.
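For concreteness, their universal intelligence measure Υ can be written as follows (notation as in their paper: π is the agent's policy, E the set of computable environments, K(μ) the Kolmogorov complexity of environment μ, and V^π_μ the agent's expected total reward in μ):

$$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}$$

The 2^{-K(μ)} factor is the Occam's razor term: simple environments dominate the sum, but every computable environment contributes something, which is what cashes out "a wide range of environments".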
We can relate Legg and Hutter's definition to the concept of optimization. According to Eliezer Yudkowsky, intelligence is efficient cross-domain optimization: it measures an agent's capacity to optimize the world according to the agent's preferences, across many domains, with limited resources.[2] Optimization measures not only the capacity to achieve a desired goal but also how economically it is achieved; it is the ability to steer the future into a small target of desired outcomes within the much larger space of all possible outcomes, using as few resources as possible. For example, when Deep Blue defeated Kasparov, it steered the game into the small set of outcomes in which it won, finding the right sequence of moves in response to Kasparov's from the very large set of all possible move sequences. In that domain, it out-optimized Kasparov. However, Kasparov would have defeated Deep Blue in almost any other relevant domain, and hence he is considered more intelligent.
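Yudkowsky's Measuring Optimization Power suggests quantifying this in bits: rank all possible outcomes by preference, and take the negative log of the fraction of outcomes at least as preferred as the one actually achieved. Below is a minimal sketch of that calculation, assuming a finite outcome space with a numeric utility per outcome (both simplifying assumptions made here for illustration):

```python
import math

def optimization_power_bits(achieved_utility, all_utilities):
    """Bits of optimization: -log2 of the fraction of outcomes
    at least as preferred as the outcome actually achieved."""
    at_least_as_good = sum(1 for u in all_utilities if u >= achieved_utility)
    return -math.log2(at_least_as_good / len(all_utilities))

# Toy example: 1024 equally likely outcomes, utility of outcome i is i.
# Achieving outcome 1022 means only 2 of 1024 outcomes are at least as
# good, so the agent exerted -log2(2/1024) = 9 bits of optimization.
print(optimization_power_bits(1022, list(range(1024))))  # 9.0
```

On this view, Deep Blue exerted enormous optimization power within chess: the set of games at least as good for it as the ones it actually played was a vanishingly small fraction of all legal continuations.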
One could cast this definition in a possible-worlds vocabulary. Intelligence is:
the ability to reliably realize one member of a small set of possible future worlds that the agent prefers over the vast set of all other, less preferred, possible worlds; while
using fewer resources than the alternative paths for getting there; and in
as diverse a range of domains as possible.
The more worlds there are with a higher preference than the one the agent realized, the less intelligent the agent is; the more worlds there are with a lower preference, the more intelligent it is. (Equivalently: the smaller the set of worlds at least as preferable as the one realized, the more intelligent the agent.) The fewer paths there are that would have reached the desired world using fewer resources than the agent spent, the more intelligent the agent is. And finally, the more domains in which the agent can optimize efficiently, the more intelligent it is. Restating this (a toy numeric sketch follows the list below), the intelligence of an agent is directly proportional to:
(a) the number of worlds with lower preference than the one realized,
(b) how much smaller the set of paths more efficient than the one taken is, and
(c) how wide the range of domains is in which the agent can efficiently realize its preferences;
and it is, accordingly, inversely proportional to:
(d) the number of worlds with higher preference than the one realized,
(e) how much bigger the set of paths more efficient than the one taken is, and
(f) how narrow the range of domains is in which the agent can efficiently realize its preferences.
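To make these proportionalities concrete, here is a deliberately toy sketch that scores an agent along the lines of (a)-(f), reusing the bits-of-optimization idea from the earlier sketch. Every input, name, and the aggregation rule here is an illustrative assumption, not part of Yudkowsky's or Legg and Hutter's formalisms:

```python
import math

def toy_intelligence_score(realized_utility, world_utilities,
                           agent_cost, path_costs, domain_count):
    """A toy aggregate of proportionalities (a)-(f) above.
    world_utilities: utilities of all reachable worlds (assumed finite)
    path_costs: resource costs of all paths to the realized world
    domain_count: number of domains where the agent optimizes efficiently
    """
    # (a)/(d): the fewer worlds at least as preferred as the realized
    # one, the higher the score.
    at_least_as_good = sum(1 for u in world_utilities if u >= realized_utility)
    outcome_bits = -math.log2(at_least_as_good / len(world_utilities))
    # (b)/(e): the fewer paths cheaper than the one taken, the higher
    # the score (+1 smoothing avoids taking the log of zero).
    cheaper = sum(1 for c in path_costs if c < agent_cost)
    efficiency_bits = -math.log2((cheaper + 1) / (len(path_costs) + 1))
    # (c)/(f): breadth of domains scales the whole score.
    return (outcome_bits + efficiency_bits) * domain_count
```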
This definition avoids several problems common in many other definitions of intelligence; in particular, it avoids anthropomorphizing intelligence.