Or, frankly, recursion at all. Say we can’t make anything smarter than humans… but we can make them reliably smart, and smaller than humans. AGI bots as smart as our average “brilliant” guy, with no morals and the ability to accelerate as only solid-state equipment can, are frankly pretty damned scary all on their own.
(You could also count, under some auspices, “intelligence explosion” as meaning “an explosion in the number of intelligences”. Imagine if for every human being the AGIs had 10,000 minds. Exactly what impact would the average human’s mental contributions have? What, then, of ‘intellectual labor’? Or manual labor?)
Good point.
In addition, supposing the AI is slightly smarter than humans and can easily replicate itself, Black Team effects could possibly be relevant (just a hypothesis, really, but still interesting to consider).
Could you expand this a little further? I’m not afraid of amoral, fast-thinking, miniature Isaac Newtons unless they are a substantial EDIT: number (>1000 at the very least) or are not known about by the relevant human policy-makers.
ETA: what it used to say at the edit was “fraction of the human population (>1% at the very least)”. TheOtherDave corrected my mis-estimate.
Have you read That Alien Message? http://lesswrong.com/lw/qk/that_alien_message/
TheOtherDave showed that I mis-estimated the critical number. That said, there are several differences between my hypothetical and the story.
1) Most importantly, the difference between the average human and Newton is smaller than the difference portrayed between the aliens and the humans.
2) There is a huge population of humans in the story, and I expressly limited my non-concern to much smaller populations.
3) The super-intelligences in the story do not appear to be known about by the relevant policy-makers (i.e., senior military officials). Not that it would matter in the story, but it seems likely to matter if the population of supers were much smaller.
I’m not sure I see the point of the details you mention. The main thrust is that humans within the normal range, given a million-fold speedup (as silicon allows) and unlimited collaboration, would be a de facto superintelligence.
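To make the million-fold figure concrete, here is a back-of-the-envelope sketch in Python; the speedup constant is the assumption under discussion, not a measured hardware number:

```python
# Rough arithmetic for a million-fold subjective speedup.
# All figures here are illustrative assumptions, not hardware claims.

SPEEDUP = 1_000_000        # assumed silicon-vs-neurons speed ratio
WALL_CLOCK_DAYS = 1        # one day of real time

subjective_days = WALL_CLOCK_DAYS * SPEEDUP
subjective_years = subjective_days / 365.25

print(f"{WALL_CLOCK_DAYS} wall-clock day -> "
      f"{subjective_years:,.0f} subjective years of thinking")
# 1 wall-clock day -> 2,738 subjective years of thinking
```

On those assumptions, every real-time day buys the collective millennia of subjective deliberation, which is the sense in which ordinary minds plus speed plus collaboration add up to a de facto superintelligence.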
The humans were not within the current normal range; the average was explicitly higher. And I think the aliens’ average intelligence was lower than the current human average, although the story is not explicit on that point. And there were billions of super-humans.
Let me put it this way: Google is smarter, wealthier, and more knowledgeable than I. But even if everyone at Google thought millions of times faster than everyone else, I still wouldn’t worry about them taking over the world. Unless nobody else important knew about this capacity.
AI is a serious risk, but let’s not underestimate how hard it is to be as capable as a Straumli Perversion.
The higher average does not mean that they were not within the normal range; they are not individually superhuman.
I don’t have a clear sense of how dangerous a group of amoral, fast-thinking, miniature Isaac Newtons might be, but it would surprise me if there were a particularly important risk-evaluation threshold crossed between 70 million amoral, fast-thinking, miniature Isaac Newtons and a mere, say, 700,000 of them.
Admittedly, I may be being distracted by the image of hundreds of thousands of miniature Isaac Newtons descending on Washington DC or something. It’s a far more entertaining idea than those interminable zombie stories.
You are right that 1% of the world population is likely too large. I probably should have said “substantial numbers in existence.” I’ve adjusted my estimate, so amoral Newtons don’t worry me unless they are secret or exist in substantial numbers (>1000). And the minimum number gets bigger unless there is reason to think amoral Newtons will cooperate amongst themselves to dominate humanity.
I don’t think the numbers I was referencing quite came across to you.
I was postulating humans:AGIs :: 1:10,000
So not 70,000 Newtons or 70 million Newtons -- 70,000 billion Newtons.
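Spelling out the arithmetic behind that figure (a quick sketch; the 7-billion world population is a round-number assumption):

```python
# Arithmetic behind the humans:AGIs :: 1:10,000 postulate.
# The population figure is rounded; both numbers are assumptions.

HUMANS = 7_000_000_000    # world population, roughly 7 billion
MINDS_PER_HUMAN = 10_000  # postulated AGI minds per human

agis = HUMANS * MINDS_PER_HUMAN
print(f"{agis:,} AGI Newtons")          # 70,000,000,000,000
print(f"= {agis / 1e12:.0f} trillion")  # 70 trillion = 70,000 billion
```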