Could Democritus have predicted an Intelligence Explosion?
Only if he had been superhumanly able to follow arguments through to the maximum extent of their consequences.
The number and complexity of the intervening steps, listed by morendil, put “FOOM from Atoms alone” firmly beyond the predictive power from limited evidence exhibited by any historical person who turned out, in the end, to be right. Which is to say, beyond Democritus, who perhaps marks the upper end of that scale. There is good reason to believe that members of Homo sapiens simply do not have enough working memory to discover “FOOM from Atoms alone.” I mean, none of the greatest scientists were anywhere near that good.
> Intelligence explosion follows from physicalism and scientific progress and not much else.
I agree that physicalism affords the possibility of an Intelligence Explosion, arguing from the observed existence of minds and from the gap between the physical limits on the size and power of possible minds and the size and power of observed minds:
1. The laws of physics allow minds.
2. The laws of physics put some upper limit on the size and power of minds, and those limits are many orders of magnitude above those of observed minds.
3. Therefore, there is plenty of room above us.
From this we can conclude that superintelligent minds are permitted by physics, but not necessarily that one can jump from intelligence to superintelligence through an intelligence explosion.
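To put rough numbers on “plenty of room,” here is a back-of-the-envelope sketch of my own, not a figure from the post: compare the brain’s energy cost per operation against the Landauer limit on irreversible computation. The brain’s op rate is a contested estimate I am assuming, so read the result as an order of magnitude only.

```python
import math

# Back-of-the-envelope only: how far is the human brain from the Landauer
# limit on energy per irreversible bit operation? brain_ops is an assumed,
# contested figure (published guesses span roughly 1e13 to 1e17 ops/s).
k_B = 1.38e-23                      # Boltzmann constant, J/K
T = 300.0                           # room-ish operating temperature, K
landauer = k_B * T * math.log(2)    # minimum energy to erase one bit, ~3e-21 J

brain_watts = 20.0                  # commonly cited brain power budget, W
brain_ops = 1e16                    # assumed operations per second
joules_per_op = brain_watts / brain_ops

print(f"Landauer limit:   {landauer:.2e} J/bit")
print(f"Brain, estimated: {joules_per_op:.2e} J/op")
print(f"Headroom: about 10^{round(math.log10(joules_per_op / landauer))}x in energy alone")
```

On these assumed numbers the brain sits roughly six orders of magnitude above the Landauer floor in energy terms, which is the “room above us” claim in miniature, and says nothing yet about whether that room is reachable.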
We know something about the physical limits of minds. Less is known about the limits to the progress of science and its more relevant proxy, engineering. We do not know whether the maximally powerful engineering available to evolved minds is above the level required for initiating the chain reaction of recursive self-improvement, an intelligence explosion towards the physical limits for minds.
This raises the question: is the ceiling of engineering prowess available to evolved minds lower than the floor of engineering prowess required for recursive self-improvement?
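The two answers to that question correspond to two regimes in a toy model of my own devising; nothing in the thread specifies this model, and the constants are invented for illustration.

```python
# Toy model: treat "engineering skill" as one number, with a breakeven floor
# for self-redesign. Above the floor each redesign compounds; below it each
# attempt loses ground. All constants are made up.

def trajectory(seed_skill, floor=1.0, rate=0.1, steps=100):
    skill = seed_skill
    history = [skill]
    for _ in range(steps):
        skill = max(skill + rate * (skill - floor), 0.0)
        history.append(skill)
    return history

print(trajectory(1.05)[-1])  # seed just above the floor: gains compound (FOOM)
print(trajectory(0.95)[-1])  # seed just below the floor: gains decay (fizzle)
```

In the sketch, everything hangs on where the seed mind sits relative to the floor; if evolved engineering tops out below it, no amount of effort inside this model produces a runaway.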
To the best of my knowledge, this question is undecided.
Is there good evidence in one direction or the other?
Superintelligence by way of a controlled Intelligence Explosion seems harder than Artificial General Intelligence by uploading by a large amount, the way intergalactic travel is harder than interstellar travel by a large amount.
An Intelligence Explosion may be permitted under physicalism, yet prohibited by cost and complexity, just as intergalactic travel is permitted under physicalism yet prohibited by cost and engineering complexity.
That would save minds such as ours, hovering around the minimum level of intelligence required for general intelligence, from being displaced by minds as powerful as the limits of physics allow.
Safety-wise, initiating an Intelligence Explosion seems about as safe as going to the Moon by riding the heat and pressure of a nuclear chain reaction instead of the tip of a large chemical rocket.
I still don’t perceive it to be anywhere close to “obvious” that an intelligence explosion, as opposed to learning and intelligence amplification, is more than a possibility (in the sense that it is far from certain).
> intelligence explosion, as opposed to learning and intelligence amplification
It seems like a false dichotomy to me. “Explosion” in this context surely just refers to rapid exponential growth, as seen in a nuclear explosion. We are seeing that already in functional intelligence, and it is happening through education and augmentation.
Right! I started out assuming an Intelligence Explosion followed from physicalism, since Luke is a cool guy and he said so. I think I changed my mind somewhere in the middle of writing the post above. Maybe the argument should stop at “Oh! Superintelligence!” instead of also reaching “Oh! Intelligence explosion!”, but Michael Vassar is smarter than me, so I’m probably missing something.
Oh. Crap. I think I reversed my position again.
Why? Well, a sufficiently powerful mind that is given enough time should either explode or go extinct.
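One way to make that concrete, as a sketch of my own under an assumed multiplicative model of capability growth:

```latex
% Assumed model, not from the thread: capability I_t grows by a factor r per step.
\[
I_{t+1} = r \, I_t \quad\Longrightarrow\quad I_t = r^{t} I_0 .
\]
% If r > 1 the trajectory diverges (explosion); if r < 1 it decays toward zero
% (extinction). Holding r = 1 exactly is an unstable knife edge, so a generic
% long-run trajectory does one or the other.
```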