It has the Efficient Cross-Domain Optimization business—again.
I can’t say I approve. My 2 cents on the issue.
I agree. It is the task of the intelligence to decide how “efficiently” it will solve a particular task. A greater intelligence may decide to pack it together with some other problems and solve them that way, many at once. That is less efficient from the point of view of this one problem, but not from a broader perspective.
It is also not always the time that is crucial; it may be the energy spent, or the boss’s nerves spared, or something else.
Serving more motives, and stronger ones, would be a better definition of a greater intelligence.
Suppose agent A has goal G, and agent B has goal H (assumed to be incompatible). Put both agents in the same world. If you reliably end up with state G, then we say that A has greater optimization power.
I guess there’s a hypothesis (though I don’t know if this has been discussed much here) that this definition of optimization power is robust, i.e. you can assign each agent a score, and one agent will reliably win over another if the difference in score is great enough.
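To picture that hypothesis, here is a toy sketch (entirely my own illustration, with an Elo-style win curve and invented numbers, not anything proposed in this thread): give each agent a scalar score, let the chance that A’s goal wins out depend only on the score gap, and a large enough gap makes the outcome all but certain.

```python
import random

def win_probability(score_a: float, score_b: float, scale: float = 400.0) -> float:
    """Elo-style chance that the world ends up in A's goal state rather than B's."""
    return 1.0 / (1.0 + 10.0 ** ((score_b - score_a) / scale))

def simulate(score_a: float, score_b: float, trials: int = 10_000) -> float:
    """Fraction of simulated worlds that end in A's goal state."""
    p = win_probability(score_a, score_b)
    return sum(random.random() < p for _ in range(trials)) / trials

# A small gap gives only a noisy edge; a large gap gives near-certain victory.
print(simulate(1250, 1200))  # roughly 0.57
print(simulate(1900, 1100))  # roughly 0.99
```

Whether agents in a messy, open-ended world really admit a one-dimensional score like this is, of course, exactly the open question.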
If the world is complex and uncertain then this will necessarily be “cross-domain” optimization power, because there will be enough novelty and variety in the sorts of tasks the agents will need to complete that they can’t just have everything programmed in explicitly at the start.
So optimization power determines who ends up ruling the world—it’s the thing that we really care about here.
But you can improve the optimization power of many kinds of agents just by adding some resource (such as money or computer hardware). This is relatively straightforward and doesn’t constitute an innovation. But to improve the resource->optimization_power function, you do need innovation, and this is what we’re trying to capture with the word “intelligence”.
(Just to make it clear, here I’m talking about innovation generating intelligence, not intelligence generating innovations.)
But we don’t always expect optimization power to scale linearly with resources, so I think Robin Hanson may be closer to the mark with his “production function” model than Yudkowsky is with his “divide one thing by the other” model. If you give me so much money that I’m no longer getting much marginal value from it, you’re not actually making me stupider.
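To make the contrast concrete, here is a minimal sketch with made-up functional forms (my own assumptions, not Hanson’s or Yudkowsky’s actual models; the function names and numbers are invented): optimization power grows with resources but with diminishing returns, and the “divide one thing by the other” number only behaves sensibly if you divide by the resources a task needs rather than the resources on hand.

```python
import math

def optimization_power(resources: float, skill: float = 1.0) -> float:
    """Toy production function with diminishing returns; 'skill' stands in for
    the innovations that improve the resource -> optimization-power mapping."""
    return skill * math.log1p(resources)

def ratio_given_resources_on_hand(resources_on_hand: float) -> float:
    """Optimization power divided by whatever resources the agent happens to hold."""
    return optimization_power(resources_on_hand) / resources_on_hand

def ratio_given_resources_needed(resources_needed: float) -> float:
    """Optimization power divided by the resources the task actually requires."""
    return optimization_power(resources_needed) / resources_needed

# Handing the agent a million dollars it cannot use drags the first ratio down...
print(ratio_given_resources_on_hand(5.0))          # ~0.36
print(ratio_given_resources_on_hand(1_000_000.0))  # ~0.000014
# ...but the task still only needs $5, so the second measure is unchanged.
print(ratio_given_resources_needed(5.0))           # ~0.36
```

The log form is arbitrary; the point is only that a production-function view keeps diminishing returns and innovation (the skill term) separate, instead of folding everything into one ratio.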
Fitnesses are dependent on the environment, though, which is a problem for that definition. If agent A has goal GA, B has goal GB, and C has goal GC, and putting A with B yields GA, B with C yields GB, and C with A yields GC, then you can’t just assign scalar fitnesses to each agent and expect that to work. That could happen with circular predation, for example.
If you do want to assign scalar fitnesses to organisms—in order to compare them—I think you have to do something like testing them on a standard suite of test environments.
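As a toy illustration of both points (rock-paper-scissors rather than anything biological, with agent names, environment names, and scores all invented for the example): head-to-head outcomes can be circular, so no single scalar can reproduce them, but averaging performance over a fixed suite of test environments still yields a usable scalar ranking.

```python
# Circular head-to-head outcomes: A beats B, B beats C, C beats A.
# No scalar fitness can reproduce this, since it would need f(A) > f(B) > f(C) > f(A).
head_to_head_winner = {
    ("A", "B"): "A",
    ("B", "C"): "B",
    ("C", "A"): "C",
}

# Scores on a standard suite of test environments (numbers invented for illustration).
test_suite_scores = {
    "A": {"env1": 0.9, "env2": 0.2, "env3": 0.6},
    "B": {"env1": 0.5, "env2": 0.8, "env3": 0.3},
    "C": {"env1": 0.2, "env2": 0.7, "env3": 0.9},
}

def scalar_fitness(agent: str) -> float:
    """Average performance across the standard suite."""
    scores = test_suite_scores[agent]
    return sum(scores.values()) / len(scores)

for agent in ("A", "B", "C"):
    print(agent, round(scalar_fitness(agent), 3))
# Suite averages: A 0.567, B 0.533, C 0.6, giving the ranking C > A > B
# even though the pairwise results above are circular and cannot be ranked at all.
```

The choice of suite does all the work here, which is presumably why it has to be a standard one.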
In Yudkowsky’s model, it’s the resources needed to complete the task that count, not the resources you happen to have. If you can solve problem x for 5 dollars and I give you a million dollars, you can still solve problem x for 5 dollars.
Good point—I’d missed that particular subtlety.
There’s another flaw in the model I presented: I was only thinking about goals that conflict with other agents’ goals. “Solve problem x for $5”-type tasks may not fall into that category, but they may still require a lot of “intelligence” to solve (although narrow intelligence may be enough).