We simply don’t know if intelligence is instrumental or quickly hits diminishing returns.
General intelligence—defined as the ability to acquire, organize, and apply information—is definitionally instrumental. Greater magnitudes of intelligence yield greater ability to acquire, organize, and apply said information.
Even if we postulate an increasing difficulty or threshold of “contemplative-productivity” per new “layer” of intelligence, the following remains true: any AGI which is designed to be more “intelligent” than the (A)GI which designed it will be material evidence that GI can be incremented upwards through design, and furthermore that a general intelligence can do this. This then implies that any general intelligence that can design an intelligence superior to itself will likely do so in a manner that creates a general intelligence which is superior at designing superior intelligences, as this has already been demonstrated to be a characteristic of general intelligences of the original intelligence’s magnitude.
Furthermore, as to the statements about evolution: evolutionary biology maximizes/optimizes for specific constraints that we humans, in designing, do not. There is no evolutionary equivalent of the atomic bomb, nor of the Saturn-V rocket. Evolution, furthermore, typically retains “just enough” of a given trait to “justify” the energy cost of maintaining said trait.
Evolutionarily speaking, abstract general intelligence has been around for less than the blink of an eye.
I don’t know that your position is coherent given these points. (Though I do want to point out, for re-emphasis, that nowhere in this did I state that seed-AGI is either likely or unlikely.)
Intelligence is instrumentally useful, but it comes at a cost. Note that only a few tens of species have developed intelligence. This suggests that intelligence is in general costly. Even if more intelligence helps an AI’s goals more, that doesn’t mean that acquiring more intelligence is easy or worth the effort.
Yes, but I don’t think many people seriously doubt this. Humans will likely do this in a few years even without any substantial AGI work, simply by genetic engineering and/or implants.
This does not follow. It could be that it gets more and more difficult to design a superior intelligence. There may be diminishing marginal returns. (See my comment elsewhere in this thread for one possible example of what could go wrong.)
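A minimal toy sketch of the two possibilities being argued here, in case it helps make the disagreement concrete: whether recursive design "takes off" depends on whether each generation's relative improvement holds up or shrinks. The capability numbers, the 20% figures, and the 10x ceiling below are invented purely for illustration, not claims about any real system.

```python
# Toy sketch, illustrative only: the capability numbers and improvement rules below
# are invented assumptions, not claims about real AI systems.

def recurse(generations, improve):
    """Each generation designs a successor; `improve` maps current capability to the next."""
    capability = 1.0
    for _ in range(generations):
        capability = improve(capability)
    return capability

# Regime A: every generation manages the same 20% relative improvement, so growth is exponential.
sustained = recurse(30, lambda c: c * 1.2)

# Regime B: designing a better successor gets harder as capability rises; each generation
# only closes 20% of the remaining gap to a hypothetical ceiling of 10x, so growth stalls.
CEILING = 10.0
diminishing = recurse(30, lambda c: c + 0.2 * (CEILING - c))

print(f"Regime A after 30 generations: {sustained:.1f}x the original")    # ~237.4x
print(f"Regime B after 30 generations: {diminishing:.1f}x the original")  # ~10.0x, pinned near the ceiling
```

Which regime obtains is exactly the open question; the sketch only shows that the conclusion changes entirely depending on that assumption.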
We seem to be talking past one another. Why do you speak in terms of evolution, where I was discussing engineered intelligence?
I’m only discussing evolved intelligences to make the point that intelligence seems to be costly from a resource perspective.
Certainly. But evolved intelligences do not optimize for intelligence. They optimize for perpetuation of the genome. Constructed intelligence allows for systems that are optimized for intelligence. This was what I was getting at when I mentioned that evolution does not optimize for what we optimize for; that there is no evolutionary equivalent of the atom bomb nor the Saturn-V rocket.
So mentioning “ways that can go wrong” and reinforcing that point with evolutionary precedent seems to be rather missing the point. It’s apples-to-oranges.
After all: even if there are diminishing returns on the energy invested to achieve a more-intelligent design, once that new design is achieved it can be replicated essentially indefinitely.
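To make the replication point concrete with a minimal sketch (all costs below are invented for illustration): however large the one-time design cost, the average cost per deployed instance falls toward the marginal copying cost as the design is replicated.

```python
# Illustrative only: invented numbers showing how a one-time design cost amortizes over copies.
design_cost = 1_000_000.0   # hypothetical one-off cost of achieving the smarter design
copy_cost = 10.0            # hypothetical marginal cost of instantiating one more copy

for copies in (1, 1_000, 1_000_000):
    average_cost = design_cost / copies + copy_cost
    print(f"{copies:>9,} copies -> average cost per instance: {average_cost:,.2f}")
```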
Taking energy to get there isn’t what is relevant in that context. The relevant issue is that being intelligent takes up a lot of resources. This is an important distinction. And the fact that evolution doesn’t optimize for intelligence but for other goals isn’t really relevant, given that an AGI presumably won’t optimize itself for intelligence (a paperclip maximizer, for example, will make itself just as intelligent as it estimates is optimal for making paperclips everywhere). The point is that, based on the data from one very common optimization process, it seems that intelligence is so resource-intensive generally that being highly intelligent is simply very rarely worth it. (This evidence is obviously weak. The substrate matters, as do other issues. But the basic point is sound.)
Note incidentally that most of the comment was not about evolved intelligences. This is not an argument occurring in isolation. See especially the other two remarks made.
Quite correct, but you’re still making the fundamental error of extrapolating from evolution to non-evolved intelligence without first correcting for the “aims”/”goals” of evolution as compared to designed intelligences when it comes to how designers might approach intelligence.
By dint of the fact that it is designed for the purpose of being intelligent, any AGI conceivably constructed by humans would be optimized for intelligence; this seems a rather routinely heritable phenomenon. While a paperclip optimizer might not seek to optimize itself for intelligence, if we postulate that it is in the business of making a ‘smarter’ paperclip optimizer, it will optimize for intelligence. Of course we cannot know the agency of any given point in the sequence, i.e. whether it will “make the choice” to recurse upwards.
That being said, there’s a real non sequitur here in your reply, insofar as I can see. “The relevant issue is that being intelligent takes up a lot of resources.” Compared to what, exactly? Roughly a fifth of our caloric intake goes to our brain, and our brain is not well-optimized for intelligence. “[...] given that an AGI presumably won’t optimize itself for intelligence”—but whatever designed that AGI would have.
I strongly disagree. The basic point is deeply flawed. I’ve already tried to say this repeatedly: evolution does not optimize for intelligence. Pointing at evolution’s history with intelligence and saying, “Aha! Optimization finds intelligence expensive!” is missing the point altogether: evolution should find intelligence expensive. It doesn’t match what evolution “does”. Evolution ‘seeks’ stable local optima that perpetuate replication of the genome. That is all it does. Intelligence isn’t integral to that process; humans didn’t need to be any more intelligent than we are in order to reach our local optimum of perpetuation, so we didn’t evolve any more intelligence.
To attempt to extrapolate from that to what intelligence-seeking designers would achieve is missing the point on a very deep level: to extrapolate correctly from the ‘lessons’ evolution would ‘teach us’, one would have to postulate a severe selection pressure favoring intelligence.
I don’t see how you’re doing that.
Quite correct, but you’re still making the fundamental error of extrapolating from evolution to non-evolved intelligence without first correcting for the “aims”/”goals” of evolution as compared to designed intelligences when it comes to how designers might approach intelligence.
I don’t understand your remark. No one is going to make an AGI whose goal is to become as intelligent as possible. Evolution is thus, in this context, one type of optimizer. Whatever one is optimizing for, becoming as intelligent as possible won’t generally be the optimal thing to do, even if becoming more intelligent does help it achieve its goals.
I would.
To which intelligence is extraneous.
Consider then an optimizer which focuses on agency rather than perpetuation. To agency, intelligence is instrumental. By dint of being artificial and designed to be intelligent, whatever goalset is integrated into it would value that intelligence.
It is not possible to intentionally design a trait into a system without that trait being valuable to the system.
Intelligence is definitionally instrumental to an artificial general intelligence. Given sufficient time, any AGI capable of constructing a superior AGI will do so.
Are you trying to make sure a bad Singularity happens?
No, intelligence is one tool among many which the blind idiot god uses. It is a particularly useful tool for species that can have widely varied environments. Unfortunately, it is a resource-intensive tool. That’s why Azathoth doesn’t use it except in a few very bright species.
You seem to be confusing two different notions of intelligence. One is the either/or “is it intelligent” and the other is how intelligent it is.
I’m not sure what you mean here.
If Logos is seeking it then I assume it is not something that he considers bad. Presumably because he thinks intelligence is just that cool. Pursuing the goal necessarily results in human extinction and tiling the universe with computronium and I call that Bad but he should still answer “No”. (This applies even if I go all cognitive-realism on him and say he is objectively wrong and that he is, in fact, trying to make sure a Bad Singularity happens.)
As I’ve asked elsewhere to resoundingly negative results:
Why is it automatically “bad” to create an AGI that causes human extinction? If I value ever-increasingly-capable sentience… why must I be anthropocentric about it? If I were to view recursively improving AGI that is sentient as a ‘child’ or ‘inheritor’ of “the human will”—then why should it be so awful if humanity were to be rendered obsolete or even extinct by it?
I do not, furthermore, define “humanity” in such strict terms as to require that it be flesh-and-blood to be “human”. If our FOOMing AGI were a “human” one—I personally would find it in the range of acceptable outcomes if it converted the available carbon and silicon of the earth into computronium.
Sure, it would suck for me—but those of us currently alive already die, and over a long enough timeline the survival rate for even the clinically immortal drops to zero.
I ask this question because I feel that it is relevant. Why is “inheritor” non-Friendly AGI “bad”?
Caveat: It is possible for me to discuss my own motivations using someone else’s valuative framework. So context matters. The mere fact that I would say “No” does not mean that I could never say “Yes—as you see it.”
It isn’t automatically bad. I just don’t want it. This is why I said your answer is legitimately “No”.
Fair enough.
Honest question: If our flesh were dissolved overnight and we instead were instantiated inside a simulated environment—without our permission—would you consider this a Friendly outcome?
Potentially, depending on the simulated environment.
Assume Earth-like or video-game-like (in the latter case including ‘respawns’).
Video game upgrades! Sounds good.
I believe you mean, “I’m here to kick ass and chew bubblegum… and I’m all outta gum!”
Given the routes to general intelligence available to “the blind idiot god”, due to the characteristics it does optimize for. We have a language breakdown here.
The reason I said intelligence is ‘extraneous’ to evolution was that evolution only ‘seeks out’ local optima for perpetuation of the genome. What specific configuration a given local optimum happens to be is extraneous to the algorithm. Intelligence is in the available solution space, but it is extraneous to the process. Which is why generalists often lose out to specialists in limited environments. (Read: the pygmy people who “went back to the trees”.)
Intelligence is not a goal to evolution; it is extraneous to its criteria. Intelligence is not the metric by which evolution judges fitness. Successful perpetuation of the genome is. Nothing else.
Not at all. Not even remotely. I’m stating that any agent—a construct (biological or synthetic) that can actively select amongst variable results; a thing that makes choices—inherently values intelligence; the capacity to ‘make good choices’. A more-intelligent agent is a ‘superior’ agent, instrumentally speaking.
Any time there is a designed intelligent agent, the presence of said intelligence is a hard indicator of the agent valuing intelligence. Designed intelligences are “designed to be intelligent” (this is tautological). This means that whoever designed that intelligence spent effort and time on making it intelligent. That, in turn, means that its designer valued that intelligence. Whatever goalset the designer imparted to the designed intelligence, thus, is a goalset that requires intelligence to be effected.
Which in turn means that intelligence is definitionally instrumentally useful to a designed intelligence.
What gives you trouble with it? Try rephrasing it and I’ll ‘correct’ said rephrasing towards my intended meaning as best I can, perhaps? I want to be understood. :)
There isn’t any need for infinite recursion. Even if there is some cap on intelligence (due to resource optimization or something else we have yet to discover), the risk is still there if the cap isn’t exceptionally near the human level. If the AI is to us what we are to chimps, it may very well be enough.
Or, frankly, recursion at all. Say we can’t make anything smarter than humans… but we can make them reliably smart, and smaller than humans. AGI bots as smart as our average “brilliant” guy, with no morals and the ability to accelerate as only solid-state equipment can, are pretty damned scary all on their own.
(You could also count, under some auspices, “intelligence explosion” as meaning “an explosion in the number of intelligences”. Imagine if for every human being the AGIs had 10,000 minds. Exactly what impact would the average human’s mental contributions have? What, then, of ‘intellectual labor’? Or manual labor?)
Good point.
In addition, supposing the AI is slightly smarter than humans and can easily replicate itself, Black Team effects could possibly be relevant (just a hypothesis, really, but still interesting to consider).
Could you expand on this a little further? I’m not afraid of amoral, fast-thinking, miniature Isaac Newtons unless they are a substantial EDIT: number (>1000 at the very least) or are not known about by the relevant human policy-makers.
ETA: what it used to say at the edit was “fraction of the human population (>1% at the very least)”. TheOtherDave corrected my mis-estimate.
Have you read That Alien Message? http://lesswrong.com/lw/qk/that_alien_message/
TheOtherDave showed that I mis-estimated the critical number. That said, there are several differences between my hypothetical and the story.
1) Most importantly, the difference between average human and Newton is smaller than the difference portrayed between aliens and humans.
2) There is a huge population of humans in the story, and I expressly limited my non-concern to much smaller populations.
3) The superintelligent beings in the story do not appear to be known about by the relevant policy-makers (i.e. senior military officials). Not that it would matter in the story, but it seems likely to matter if the population of supers were much smaller.
I’m not sure I see the point of the details you mention. The main thrust is that humans within the normal range, given a million-fold speedup (as silicon allows) and unlimited collaboration, would be a de facto superintelligence.
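For concreteness, here is the arithmetic behind the assumed million-fold speedup; the speedup factor is simply the one postulated above, and the rest is unit conversion.

```python
# Subjective time gained per unit of wall-clock time at an assumed 1,000,000x speedup.
SPEEDUP = 1_000_000
HOURS_PER_YEAR = 24 * 365.25

for label, wall_hours in (("one hour", 1), ("one day", 24), ("one year", HOURS_PER_YEAR)):
    subjective_years = wall_hours * SPEEDUP / HOURS_PER_YEAR
    print(f"{label} of wall-clock time -> roughly {subjective_years:,.0f} subjective years")
```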
The humans were not within the current normal range. The average was explicitly higher. And I think that the aliens’ average intelligence was lower than the current human average, although the story is not explicit on that point. And there were billions of super-humans.
Let me put it this way: Google is smarter, wealthier, and more knowledgeable than I. But even if everyone at Google thought millions of times faster than everyone else, I still wouldn’t worry about them taking over the world. Unless nobody else important knew about this capacity.
AI is a serious risk, but let’s not underestimate how hard it is to be as capable as a Straumli Perversion.
The higher average does not mean that they were not within the normal range. They are not individually superhuman.
I don’t have a clear sense of how dangerous a group of amoral fast-thinking miniature Isaac Newtons might be but it would surprise me if there were a particularly important risk-evaluation threshold crossed between 70 million amoral fast-thinking miniature Isaac Newtons and a mere, say, 700,000 of them.
Admittedly, I may be being distracted by the image of hundreds of thousands of miniature Isaac Newtons descending on Washington DC or something. It’s a far more entertaining idea than those interminable zombie stories.
You are right that 1% of the world population is likely too large. I probably should have said “substantial numbers in existence.” I’ve adjusted my estimate, so amoral Newtons don’t worry me unless they are secret or exist in substantial numbers (>1000 at the very least). And the minimum number gets bigger unless there is reason to think amoral Newtons will cooperate amongst themselves to dominate humanity.
I don’t think the numbers I was referencing quite came across to you.
I was postulating humans:AGIs :: 1:10,000
So not 700,000 Newtons or 70 million Newtons, but 70,000 billion Newtons.
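Spelling out that ratio as arithmetic, assuming a world population of roughly 7 billion (my added baseline figure, not stated above):

```python
# Rough arithmetic behind the 1:10,000 ratio; the 7 billion world population is an assumed baseline.
humans = 7_000_000_000
agis_per_human = 10_000
total_agis = humans * agis_per_human
print(f"{total_agis:,} AGIs")  # 70,000,000,000,000, i.e. 70,000 billion (70 trillion)
```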