Thanks for clarifying. Yeah, I agree the argument is mathematically correct, but it kinda doesn't seem to apply to the historical cases of intelligence increase that we have:
Human intelligence is a drastic jump from primate intelligence, but this didn't require a drastic jump in "compute resources" and took comparatively little time in evolutionary terms.
In human history, our "effective intelligence" (our capability to make decisions with the use of man-made tools) has grown at an increasing rate, not a decreasing one.
I’m still thinking about how best to reconcile this with the asymptotics. I think the other comments are right in that we’re still at the stage where improving the constants is very viable.
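To make that concrete with a toy model (my own illustration, assuming logarithmic returns purely for the sake of argument): if capability scales like $I(R) = a \log R$ in resources $R$, then an algorithmic improvement that merely doubles the constant $a$ buys as much capability as squaring the resource budget, since

$$2a \log R = a \log R^2.$$

So even if the asymptotics eventually dominate, constant-factor gains can stand in for enormous amounts of extra compute over the range we actually care about.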
Human intelligence is a drastic jump from primate intelligence, but this didn't require a drastic jump in "compute resources" and took comparatively little time in evolutionary terms.
Oh man, am I not convinced of this at all. Human intelligence seems to me to be only the result of (1) scaling up primate brains and (2) accumulating knowledge in the form of language, which relied on (3) humans and hominids in general being exceptional at synchronized behavior and collective action (e.g., "charge!!!"). Modern primates besides humans are still exceptionally smart per synapse among the animal kingdom.
I agree that humans are not drastically more intelligent than all other animals. This makes the prospect of AI even scarier, in my opinion, since it shows how powerful accumulated progress is.
I believe that human-level intelligence is sufficient for an AI to be extremely dangerous if it can scale while maintaining self-alignment in the form of "synchronized behavior and collective action". Imagine what a tech company could achieve if all employees had the same company-aligned goals, efficient coordination, in silico processing speeds, high-bandwidth communication of knowledge, etc. With these sorts of advantages, it's likely game over before such an AI even hits human-level intelligence across the board.
Indeed. My commentary should not be seen as a reason to believe we're safe, just a reason to believe the curve's sharpness isn't quite as bad as one might have imagined.
My impression is that the human brain is a scaled-up primate brain.
As for humanity’s effective capabilities increasing with time:
Language allowed accumulation of knowledge across generations, plus cultural evolution
Population growth has been (super)exponential over the history of humanity
Larger populations afforded specialisation/division of labour, trade, economics, industry, etc.
Alternatively, our available resources have grown at a superexponential rate.
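(For what "superexponential" means concretely, a standard toy model of historical population growth has the growth rate itself scale with population, roughly $\dot{P} \propto P^2$, which gives hyperbolic growth

$$P(t) \approx \frac{C}{t_0 - t},$$

i.e. a curve that outpaces any fixed exponential and would blow up in finite time; obviously the real trend has to break down before $t_0$. This is just an illustrative fit, not a load-bearing claim.)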
The issue is takeoff being fast relative to the reaction time of civilisation. The AI would need to grow its invested resources much faster than civilisation has been growing its own to date.
But resource investment seems primed to slow down, if anything.
Resource accumulation certainly can't grow exponentially indefinitely, and I agree that recursive self-improvement (RSI) can't continue exponentially forever either, but it doesn't need to for AI to take over.
An AI doesn’t have to get far beyond human-level intelligence to control the future. If there’s sufficient algorithmic overhang, current resources might even be enough. FOOM would certainly be easier if no new hardware were necessary. This would look less like an explosion and more like a quantum leap followed by slower growth as physical reality constrains rapid progress.
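To sketch the shape I have in mind (an illustrative toy model, not anything derived from the thread): logistic growth looks explosive while far from its ceiling and then flattens as physical constraints bind,

$$\dot{x} = r\,x\!\left(1 - \frac{x}{K}\right) \quad\Rightarrow\quad x(t) = \frac{K}{1 + \frac{K - x_0}{x_0}\, e^{-r t}},$$

where early on ($x \ll K$) the curve is indistinguishable from an exponential with rate $r$, and near the ceiling $K$ (standing in here for hardware and other physical limits) progress slows to a crawl.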
Explain the inside view of "algorithmic overhang"?
I don't have an inside view. If I did, that would be pretty powerful capabilities information.
I'm pointing at the possibility that we already have more than sufficient resources for AGI and that we're only separated from it by a few insights (à la transformers) and clever system architecture. I'm not predicting this is true, just that it's plausible based on existing intelligent systems (humans).
Epistemic status: pondering aloud to coalesce my own fuzzy thoughts a bit
I'd speculate that the missing pieces are conceptually tricky things like self-referential "strange loops", continual learning with updateable memory, and agentic interaction with an environment. These are only vague ideas in my mind, and for some reason they feel difficult to solve, yet they don't feel like things that require massive data and training resources so much as useful connections to reality and to the system itself.