What exactly is the difference in meaning of “intelligence”, “rationality”, and “optimization power” as used on this site?
Optimization power is a process’s capacity for reshaping the world according to its preferences.
Intelligence is optimization power divided by the resources used.
“Intelligence” is also sometimes used to talk about whatever is being measured by popular tests of “intelligence,” like IQ tests.
Rationality refers to both epistemic and instrumental rationality: the craft of obtaining true beliefs and of achieving one’s goals. Also known as systematized winning.
If I had a moderately powerful AI and figured out that I could double its optimisation power by tripling its resources, would my improved AI actually be less intelligent? What if I repeated this process a number of times? I could end up with an AI that had enough optimisation power to take over the world, and yet its intelligence would be extremely low.
We don’t actually have units of ‘resources’ or optimization power, but I think the idea would be that any non-stupid agent should at least triple its optimization power when you triple its resources, and possibly more. As a general rule, if I have three times as much stuff as I used to have, I can at the very least do what I was already doing but three times simultaneously, and hopefully pool my resources and do something even better.
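As a purely illustrative sketch of how the ratio definition behaves in the scenario above (the numbers and units are made up, since we have no real units for either quantity):

```python
# Illustrative only: arbitrary units and invented numbers, since no real
# units for "optimization power" or "resources" exist.
def intelligence(optimization_power, resources):
    """The ratio definition: optimization power per unit of resources used."""
    return optimization_power / resources

base = intelligence(optimization_power=10.0, resources=1.0)      # 10.0
upgraded = intelligence(optimization_power=20.0, resources=3.0)  # ~6.7

# Doubling optimization power while tripling resources lowers the ratio,
# even though the upgraded AI is strictly better at reshaping the world.
print(base, upgraded)
```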
For “optimization power”, we do now have some fairly reasonable tests:
AIQ
Generic Compression Benchmark
Machine learning and AI algorithms typically display the opposite of this, i.e. sub-linear scaling. In many cases there are hard mathematical results that show that this cannot be improved to linear, let alone super-linear.
This suggests that if a singularity were to occur, we might be faced with an intelligence implosion rather than an explosion.
If intelligence = optimization power / resources used, this might well be the case. Nonetheless, this “intelligence implosion” would still involve entities with increasing resources and thus increasing optimization power. A stupid agent with a lot of optimization power (Clippy) is still dangerous.
I agree that it would be dangerous.
What I’m arguing is that dividing by resource consumption is an odd way to define intelligence. For example, under this definition, is a mouse more intelligent than an ant? Clearly a mouse has much more optimisation power, but it also has a vastly larger brain. So once you divide out the resource difference, maybe ants are more intelligent than mice? It’s not at all clear. That this could even be a possibility runs strongly counter to the everyday meaning of intelligence, as well as to definitions given by psychologists (as Tim Tyler pointed out above).
I checked with: A Collection of Definitions of Intelligence.
Out of 71 definitions, only two mentioned resources.
The paper suggests that the nearest thing to a consensus is that intelligence is about problem-solving ability in a wide range of environments.
Yes, Yudkowsky apparently says otherwise—but: so what?
I don’t think he really said this. The exact quote is:

“If you want to measure the intelligence of a system, I would suggest measuring its optimization power as before, but then dividing by the resources used. Or you might measure the degree of prior cognitive optimization required to achieve the same result using equal or fewer resources. Intelligence, in other words, is efficient optimization.”
This seems like just a list of different measurements trying to convey the idea of efficiency.
When we want something to be efficient, we really just mean that we have other things to use our resources for. The right way to measure this is in terms of the marginal utility of the other uses of resources. Efficiency is therefore important, but trying to calculate efficiency by dividing is an oversimplification.
What about a giant look-up table, then?
That requires lots of computing resources. (I think that’s the answer.)
That would surely be very bad at solving problems in a wide range of environments.
For any agent, I can create a GLUT that solves problems just as well (provided the vast computing resources necessary to store it), by just duplicating that agent’s actions in all of its possible states.
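As a minimal sketch of the construction being described (the agent object and its act method here are hypothetical, purely for illustration):

```python
# Illustrative sketch only: a "giant look-up table" (GLUT) built by recording
# a hypothetical agent's chosen action in every state it could encounter.
def build_glut(agent, all_possible_states):
    """Tabulate the agent's behaviour; the table alone is then deployed in its place."""
    return {state: agent.act(state) for state in all_possible_states}

def glut_act(glut, state):
    # The table reproduces the original agent's behaviour exactly, but its size
    # grows with the number of possible states rather than with any insight.
    return glut[state]
```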
Surely its performance would be appalling on most problems—vastly inferior to a genuinely intelligent agent implemented with the same hardware technology—and so it would fail to solve many of the problems that have time constraints. The idea of a GLUT seems highly impractical. However, if you really think that it would be a good way to construct an intelligent machine, go right ahead.
“vastly inferior to a genuinely intelligent agent implemented with the same hardware technology”

I agree. That’s the point of the original comment: that “efficient use of resources” is as much a factor in our concept of intelligence as “cross-domain problem-solving ability”. A GLUT could have the latter attribute, but not the former.
“Cross-domain problem-solving ability” implicitly includes the idea that some types of problem may involve resource constraints. The issue is whether that point needs further explicit emphasis—in an informal definition of intelligence.
Sure, if you had an infinitely big and fast computer. Of course, even then you still wouldn’t know what to put in the table. But if we’re in infinite theory land, then why not just run AIXI on your infinite computer?
Back in reality, the lookup table approach isn’t going to get anywhere. For example, if you use a video camera as the input stream, then after just one frame of data your table would already need something like 256^1000000 entries. The observable universe only has about 10^80 particles.
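A quick back-of-the-envelope check of those numbers (assuming, for illustration, a one-megapixel frame with 256 possible values per pixel):

```python
import math

# Rough numbers from the comment above: a one-megapixel frame with 256
# possible values per pixel gives 256**1_000_000 possible first inputs.
pixels = 1_000_000
values_per_pixel = 256

log10_table_entries = pixels * math.log10(values_per_pixel)

print(f"Table entries needed: ~10^{log10_table_entries:.0f}")  # ~10^2408240
print("Particles in the observable universe: ~10^80")
```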
You misunderstand me. I’m pointing out that a GLUT is an example of something with (potentially) immense optimization power, but whose use of computational resources is ridiculously prodigal, and which we might hesitate to call truly intelligent. This is evidence that our concept of intelligence does in fact include some notion of efficiency, even if people don’t think of this aspect without prompting.
Right, but the problem with this counterexample is that it isn’t actually possible. A counterexample that could occur would be much more convincing.
Personally, if a GLUT could cure cancer, cure aging, prove mind-blowing mathematical results, write an award-winning romance novel, take over the world, and expand out to take over the universe… I’d be happy considering it to be extremely intelligent.
It’s infeasible within our physics, but it’s possible for (say) our world to be a simulation within a universe of vaster computing power, and to have a GLUT from that world interact with our simulation. I’d say that such a GLUT was extremely powerful, but (once I found out what it really was) I wouldn’t call it intelligent, though I’d expect whatever process produced it (e.g. coded in all of the theorem-proof and problem-solution pairs) to be a different and more intelligent sort of process.
That is, a GLUT is the optimizer equivalent of a tortoise with the world on its back: it needs to be supported on something, and it would be highly unlikely to be tortoises all the way down.
A ‘featherless biped’ definition. That is, it’s a decent attempt at a simplified proxy, but it massively breaks down if you search for exceptions.
What Intelligence Tests Miss is a book about the difference between intelligence and rationality. The linked LW-article about the book should answer your questions about the difference between the two.
A short answer would be that intelligence describes how well you think, but it leaves out some important traits and knowledge, such as: Do you use your intelligence (are you a reflective person)? Do you have a strong need for closure? Can you override your intuitions? Do you know Bayes’ theorem, probability theory, or logic?
“Intelligence” is often defined as the “g-factor” of humans—which is a pretty sucky definition of “rationality”.
Look at the definitions of “intelligence” used by machine intelligence researchers, though, and they are much closer to “rationality”.