Surely this doesn’t increase intelligence, just optimization power. If you are going to introduce definitions, stick by them. :)
“Communication speed. Axons carry spike signals at 75 meters per second or less (Kandel et al. 2000).
That speed is a fixed consequence of our physiology. In contrast, software minds could be ported
to faster hardware, and could therefore process information more rapidly. (Of course, this also
depends on the efficiency of the algorithms in use; faster hardware compensates for less efficient
software.)”
This seems confusing… When we talk about the speed of computers, we generally aren’t talking about signal propagation speed (which, AFAIK, has been a large fraction of the speed of light in most computers); it hasn’t been something we have tried to optimise.
Having a fast signal propagation speed would allow for faster reaction times, but I’m not sure what other benefit you are suggesting it would provide that would allow an AI to dominate humanity.
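To put rough numbers on it (a back-of-the-envelope comparison of my own; I’m assuming signals in copper interconnects travel at roughly two-thirds the speed of light and taking 10 cm as a representative path length):

```latex
% Time for a signal to cross an assumed 10 cm path:
t_{\text{axon}} \approx \frac{0.1\,\text{m}}{75\,\text{m/s}} \approx 1.3\,\text{ms},
\qquad
t_{\text{wire}} \approx \frac{0.1\,\text{m}}{2\times 10^{8}\,\text{m/s}} \approx 0.5\,\text{ns},
\qquad
\frac{t_{\text{axon}}}{t_{\text{wire}}} \approx 3\times 10^{6}.
```

Either way the gap is roughly six orders of magnitude, which is presumably part of why propagation delay has never been the thing computer designers needed to optimise.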
“Goal coordination. Let us call a set of AI copies or near-copies a “copy clan.” Given shared goals, a
copy clan would not face certain goal coordination problems that limit human effectiveness
(Friedman 1993). A human cannot use a hundredfold salary increase to purchase a hundredfold
increase in productive hours per day. But a copy clan, if its tasks are parallelizable, could do just
that. Any gains made by such a copy clan, or by a human or human organization controlling that
clan, could potentially be invested in further AI development, allowing initial advantages to
compound.”
This seems to neglect the overhead of normal co-ordination, e.g. deciding who does what task. For example, say you are doing research: you do a search on a subject and each copy takes one page of Google Scholar results, then follows the interesting references. However, those references are likely to overlap, so you would get duplicated effort. And because the copies are likely to have the same interests, they are more likely to duplicate research than ordinary humans would be.
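To make the overlap worry concrete, here is a rough, purely illustrative simulation (the preference model and all the numbers are invented for the example): every copy shares the same “interestingness” ranking over a pool of references, each follows a handful of them, and we count how much of the follow-up work is duplicated.

```python
import random

def duplicated_effort(n_copies=100, pool_size=1000, follows_per_copy=20, seed=0):
    """Purely illustrative: every copy shares the same interest ranking over a
    pool of references, so their choices pile up on the same items. Returns the
    fraction of reference-follows that duplicate work some other copy already did."""
    rng = random.Random(seed)
    # Zipf-like shared preference: the same references look interesting to every copy.
    weights = [1.0 / (rank + 1) for rank in range(pool_size)]
    all_follows = []
    for _ in range(n_copies):
        picks = set()
        while len(picks) < follows_per_copy:
            picks.add(rng.choices(range(pool_size), weights=weights, k=1)[0])
        all_follows.extend(picks)
    unique = len(set(all_follows))
    return 1.0 - unique / len(all_follows)

print(f"duplicated fraction of effort: {duplicated_effort():.2f}")
```

Because the copies converge on the same top references, a large fraction of the follows are repeats; humans with differing interests would spread out more.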
“Duplicability”: I’m sceptical of this to a certain extent. While it would lead to very good short-term gains, having lots of copies that all think the same way will, I think, cause some research avenues to go unexplored, because every copy expects those avenues to have minimal expected value (e.g. a billion Einsteins might ignore quantum physics).
So I wouldn’t expect this sort of intelligence explosion to dominate the rest of humanity in research and science.
Surely this doesn’t increase intelligence, just optimization power. If you are going to introduce definitions, stick by them. :)
This jumped out at me as well, though I forgot about it when writing my other comment.
I think it’s important to distinguish between what I’d call “internal” and “external” resources. If we took the “intelligence = optimization power / resources” thing to literally mean all resources, it would mean that AIs couldn’t become more intelligent by simply adding hardware, which is arguably their strongest advantage over humans. It might also mean that bacteria could turn out to be “smarter” than humans—they can accomplish far fewer things, but they also use far fewer resources.
Intuitively, there’s a clear difference between a larger brain making a human more powerful than a dog (“internal resources”), and a larger bank account making a human more powerful than another human (“external resources”). Fortunately, this distinction emerges pretty easily from Legg & Hutter’s intelligence formalism. (Luke & Anna didn’t actually use the formalism, but the distinction emerging easily from the formalism suggests to me that the distinction actually carves reality at the joints and isn’t just an arbitrary one.)
The formalism is fundamentally pretty simple: there’s an agent, which receives a stream of observations about the environment, chosen from some set of symbols. In response, it chooses some action, again chosen from some (other) set of symbols, and gets some reward. Then it makes new observations and chooses new actions.
Legg & Hutter’s formalism treats the agent itself as a black box: it doesn’t care about how the agent reaches its conclusions, or for that matter, whether the agent does anything that could be called “reaching a conclusion” in the first place. It only looks at whether the agent is able to match its actions to the observations so as to produce the highest rewards. So “internal resources” would be things that go into that black box, only affecting the choices that the agent makes.
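A minimal sketch of that setup (my own illustration of the formalism, not code from Legg & Hutter): the agent is just an opaque function from the interaction history to an action, and the evaluation only ever looks at the reward it accumulates.

```python
from typing import Callable, Sequence, Tuple

# Observation and action symbols; the agent itself is an opaque policy.
Observation = str
Action = str
History = Sequence[Tuple[Observation, float, Action]]
Agent = Callable[[History, Observation], Action]  # the "black box"

def run_episode(agent: Agent, env_step, first_obs: Observation, steps: int) -> float:
    """Interact for `steps` rounds and return the total reward. `env_step` maps the
    chosen action to (next_observation, reward); how the agent decides is invisible."""
    history, obs, total = [], first_obs, 0.0
    for _ in range(steps):
        action = agent(history, obs)          # "internal resources" only affect this call
        next_obs, reward = env_step(action)   # consequences are determined out here
        history = [*history, (obs, reward, action)]
        obs, total = next_obs, total + reward
    return total
```

Whatever internal resources the agent has (a bigger lookup table, faster hardware) only change what comes out of the `agent(...)` call; the environment, and the set of actions it accepts, is fixed from outside.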
“External resources”, on the other hand, are things that affect the agent’s set of actions. Intuitively, a rich person can do things that a poor person can’t: for instance, buying a house that costs a million dollars. In Legg & Hutter’s formalism, the rich person would have a broader set of actions that they could choose from.
We can then rewrite “intelligence is optimization power divided by resources” as “intelligence is the optimization power of an agent when their set of available actions is held constant”. I think this is a pretty good match for our intuitive sense of “intelligence”. If you can solve a puzzle in one move by bribing the person who’s administering it, that might get you a higher amount of reward, but it doesn’t make you more intelligent than the person who doesn’t have that option and has to solve it the hard way. (If you wanted to be exact, you’d also need to hold the set of observations constant, so that e.g. a seeing person didn’t end up more intelligent than a blind one.)
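For what it’s worth, Legg & Hutter’s measure itself looks like this, and the rewrite above amounts to only comparing Υ between agents that share the same interface (that restriction is my gloss on the proposal, not something from their paper):

```latex
% Legg & Hutter's universal intelligence of a policy \pi:
% environments \mu \in E are weighted by simplicity (via Kolmogorov complexity K),
% and V^{\pi}_{\mu} is the expected total (bounded) reward \pi earns in \mu.
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V^{\pi}_{\mu}
% Proposed restriction: only rank \pi_1 against \pi_2 when they share the same
% action set \mathcal{A} and observation set \mathcal{O}, so that differences in
% \Upsilon reflect what happens inside the black box rather than the options
% available to it.
```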
ETA: Technically, the poor person does have the same set of actions available as the rich person—they too can claim to have a million dollars and try to tell their bank to transfer the money to the seller. It’s just that the same actions produce different consequences—if you don’t have the million dollars in your bank, the bank will refuse to obey you. But saying that they have different sets of actions gets close enough, I think.
I think this particular argument would dissolve away if the paper said “may allow AIs to acquire vastly more optimization power” instead of “vastly more intelligent”.
The key point here is not that AIs have more computational resources available than humans, but that they are (presumably) able to translate extra computational resources directly into extra cognitive power. So they can use that particular resource much more efficiently than humans can.
EDIT: actually I’m confusing two concepts here. There’s “computer hardware”, which is an external resource that AIs are better at utilizing than we are. Then there’s “computational power”, which AIs obtain from computer hardware and we obtain from our brains. This is an internal resource, and while I believe it’s what the paper was referring to as “increased computational resources”, I’m not sure it counts as a “resource” for the purposes of Yudkowsky’s definition of intelligence.
I have trouble with this definition, to be honest. I can’t help being nit-picky. I don’t really like treating computers (including my own brain) as black-box functions; I prefer to think of them as physical systems with many inputs and outputs (which are not obvious).
There are many actions I can’t perform: I can’t consume 240 V electricity or emit radio-frequency EM radiation. Is a power supply an external resource for a computer? Are my hands an external resource for my brain (they allow more actions)?
Cutting off my hands would severely curtail my ability to solve problems unrelated to actually having hands (no more making notes on problems, and typing a program to solve a problem would be a little bit trickier).
Okay, so let’s try a thought experiment: give the AI a human body with a silicon brain that runs off the glucose in the blood supply. Brains use 20 watts or so (my Core 2 Duo laptop draws about 12 W when not doing much, although that includes a screen). Give it no Ethernet port and no wifi. Give it eyes and ears that take in the same data as a human’s. Then we could try to compare it roughly with a human’s capabilities, to discover whether it is more intelligent or not. One major issue: if it doesn’t perform the correct autonomic functions of the human brain (breathing etc.), the body is likely to die and not be able to solve many problems. It is this kind of context sensitivity that makes me despair of trying to pin an intelligence number on a system.
However, this model isn’t even very useful for predicting the future. Real computers do have gigabit Ethernet, and they can easily expand to take more power. Even if such an AI took an age to learn how to control a pen to answer questions, that doesn’t help us.
This is unsatisfactory. I’ll have to think about this issue some more.