This is an interesting case, and reason enough to form your hypothesis, but I don’t think observation backs up the hypothesis:
The difference in intelligence between the smartest academics and even their typical colleagues is phenomenal, to say nothing of the difference between the smartest academics and an average person. Nonetheless, the brain size of all these people is more or less the same. The difference in effectiveness is similar to the gulf between men and apes. The accomplishments of the smartest people are beyond the reach of average people: there are things smart people can do that average people cannot, regardless of their numbers or resources.
The only way to create the conditions for any sort of foom would be to shun a key area completely for a long time, so that ultimately it could be rapidly plugged into a system that is very highly developed in other ways.
Such as, for instance, the fact that our brains are unlikely to be optimal computing machines, and could be greatly accelerated on silicon.
Forget about recursive FOOMs for a minute. Do you not think a greatly accelerated human would be orders of magnitude more useful (more powerful) than a regular human?
You raise an interesting point about differences among humans. It seems to me that, caveats aside about there being a lot of exceptions, different kinds of intelligence, IQ being an imperfect measure, etc., there is indeed a large difference in typical effectiveness between, say, IQ 80 and 130...
… and yet not such a large difference between IQ 130 and IQ 180. Last I heard, the world’s highest-IQ person wasn’t cracking problems the rest of us had found intractable; she was just writing self-help books. I find this counterintuitive. One possible explanation is a generalized version of Amdahl’s law: maybe by the time you get to IQ 130, raw intelligence is no longer the limiting factor on effectiveness. It’s also been suggested that to get a human brain to very high IQ levels, you have to make trade-offs; I don’t know whether there’s much evidence for or against this.
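A toy model may make the Amdahl’s-law intuition concrete (my own illustrative sketch; the fractions are made up): suppose only a fraction p of overall effectiveness is bottlenecked on raw intelligence, with the remainder set by motivation, opportunity, luck, and so on. Then multiplying the intelligence-limited part by any factor s can never gain you more than 1/(1-p) overall.

```python
# Toy generalized-Amdahl model (illustrative; p and s are made-up numbers).
# A fraction p of "effectiveness" is bottlenecked on raw intelligence;
# the rest (1 - p) is bottlenecked on other factors.

def overall_gain(p: float, s: float) -> float:
    """Overall effectiveness multiplier when the intelligence-limited
    fraction p is sped up by a factor s (classic Amdahl form)."""
    return 1.0 / ((1.0 - p) + p / s)

for p in (0.5, 0.9):
    for s in (2, 10, 1000):
        print(f"p={p}, s={s:>4}: gain = {overall_gain(p, s):.2f}x")
    # As s grows without bound, the gain approaches 1 / (1 - p),
    # i.e. 2x for p=0.5 and 10x for p=0.9.
```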
As for uploading, yes, I think it would be great if we could hitch human thought to Moore’s Law, and I don’t see any reason why this shouldn’t eventually be possible.
You’re right that the correlation between IQ and effectiveness breaks down at higher IQs. Nonetheless, there doesn’t appear to be any sharp limit to effectiveness itself. This suggests to me that it is IQ as a measure that is breaking down, rather than us reaching some point of diminishing returns.
As for uploading, yes, I think it would be great if we could hitch human thought to Moore’s Law, and I don’t see any reason why this shouldn’t eventually be possible.
My point here was that human minds are lop-sided, to use your terminology. They are sorely lacking in certain hardware optimizations that could render them thousands or millions of times faster (this is contentious, but I think reasonable). Exposing human minds to Moore’s Law doesn’t just give them the continued benefit of exponential growth; it gives them a huge one-off explosion in capability.
For all intents and purposes, an uploaded IQ 150 person accelerated a million times might as well be a FOOM in terms of capability. Likewise an artificially constructed AI with similar abilities.
(Edit: To be clear, I’m skeptical of true recursive FOOMs as well. However, I don’t think something that powerful is needed in practice for a hard takeoff to occur, and I think the arguments for FAI carry through just as well even if self-modifying AIs hit a ceiling after the first or second round of self-modification.)
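To put a rough number on “accelerated a million times” (a back-of-the-envelope illustration; the speedup factors are assumptions, not claims about feasible hardware):

```python
# Back-of-the-envelope: subjective time experienced per real day
# at assumed upload speedup factors (illustrative numbers only).
DAYS_PER_YEAR = 365.25

for speedup in (1_000, 1_000_000):
    subjective_years_per_real_day = speedup / DAYS_PER_YEAR
    print(f"{speedup:>9,}x: one real day = "
          f"{subjective_years_per_real_day:,.1f} subjective years")
# 1,000x     -> ~2.7 subjective years per real day
# 1,000,000x -> ~2,700 subjective years per real day
```

At the higher figure, a single real-time day gives the upload millennia of thinking time, which is the sense in which it “might as well be a FOOM.”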
Sure, I’m not saying there is a sharp limit to effectiveness, at least not one we have nearly reached, only that improvements in effectiveness will continue to be hard-won.
As for accelerating human minds, I’m skeptical about a factor of millions, but thousands, yes, I could see that ultimately happening. But getting to that point is not going to be a one-off event. Even after we have the technology for uploading, there’s going to be an awful lot of work just debugging the first uploaded minds, let alone getting them to the point where they’re not orders of magnitude slower than the originals. Only then will the question of doubling their speed every couple of years even arise.
Sure, I’m not saying there is a sharp limit to effectiveness, at least not one we have nearly reached, only that improvements in effectiveness will continue to be hard-won.
My original example about academics was to demonstrate that there are huge jumps in effectiveness between individuals, on the order of the gap between man and ape. This goes against your claim that the jump from ape to man was a one-time bonanza. The question isn’t whether additional gains are hard-won, but how discontinuous their effects are. There is a striking discontinuity between the effectiveness of different people.
But getting to that point is not going to be a one-off event. Even after we have the technology for uploading, there’s going to be an awful lot of work just debugging the first uploaded minds, let alone getting them to the point where they’re not orders of magnitude slower than the originals.
That is one possible future. Here’s another one:
Small animal brains are uploaded first, and the kinks and bugs are largely worked out there. The original models are incredibly detailed and high-fidelity (because no one knows which details can safely be thrown out). Once animal brains are emulating well, a plethora of simplifications to the model are found that preserve the qualitative behavior of the mind, allowing for orders-of-magnitude speedups. Human uploads quickly follow, and intense pressure to optimize leads to additional orders-of-magnitude speedups. Within a year, the fastest uploads are well beyond what meatspace humans can compete with. The uploads then leverage their power to pursue additional research in software and hardware optimization, further securing an enormous lead.
(If Moore’s Law continued to hold in their subjective time frame, then even if they were only 1000x faster they would double in speed roughly every real-time day. In fact, if Moore’s Law held indefinitely, they would create a literal singularity in about 2 days. That’s absurd, of course. But the point is that what Moore’s Law looks like in the future could be unexpected once uploads arrive.)
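The arithmetic behind that parenthetical, spelled out (assuming a two-year Moore’s Law doubling time and a 1000x starting speedup, both just illustrative):

```python
# Moore's Law applied in the uploads' *subjective* time frame:
# each doubling takes 2 subjective years, which at a 1000x speedup
# is under one real-time day -- and the gaps halve with each doubling.
DOUBLING_TIME_SUBJECTIVE_YEARS = 2.0
DAYS_PER_YEAR = 365.25

speedup = 1_000.0
real_days_elapsed = 0.0
for doubling in range(1, 11):
    # Real time needed for the uploads to experience 2 subjective years:
    real_days = DOUBLING_TIME_SUBJECTIVE_YEARS * DAYS_PER_YEAR / speedup
    real_days_elapsed += real_days
    speedup *= 2
    print(f"doubling {doubling:2d}: day {real_days_elapsed:6.3f}, "
          f"speedup now {speedup:>12,.0f}x")
# The geometric series converges: total real time approaches ~1.5 days
# (0.73 + 0.37 + 0.18 + ...), the "singularity in about 2 days" ballpark.
```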
There are a million other possible futures, of course. I’m just pointing out that you can’t look at one thing (Moore’s Law) and expect to capture the whole picture.
Average performance in science and average income keep improving substantially with IQ well past 130: http://www.vanderbilt.edu/Peabody/SMPY/Top1in10000.pdf
Some sources of high intelligence likely come with other psychological trade-offs, which would explain why they haven’t gone to fixation throughout the population.
Thanks for the correction!