This is a solid argument insofar as we define RSI to be about an agent directly modifying its own weights or other inscrutable reasoning atoms. That does seem to be quite hard given our current understanding.
But there are tons of opportunities for an agent to improve its own reasoning capacity otherwise. At a very basic level, the agent can do at least two other things:
Make itself faster and more energy efficient—in the DL paradigm, techniques like quantization, distillation, and pruning seem to be very effective when applied by humans and keep improving, so it’s likely an AGI would improve them further (a rough quantization sketch appears further below).
Invent computational tools: with regard to
Most problems in computer science have superlinear time complexity
On one hand, sure: improving on this is (likely) impossible in the limit because of fundamental complexity-theoretic barriers. On the other hand, the agent can still become vastly smarter than humans. A particular example: the human mind, without any assistance, is very bad at solving 3SAT. But we invented computers, and then constraint solvers, and now we can solve many practical 3SAT instances enormously faster, even though 3SAT is (likely) exponentially hard in the worst case (toy example below). So the RSI argument here is: the smarter (or faster) the model is, the more special-purpose tools it can create to efficiently solve specific problems and thus upgrade its reasoning ability. Not to infinity, but likely far beyond humans.
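As a toy illustration of that tool-building point (my own example; it assumes the third-party python-sat package, which is not mentioned anywhere above), the same tiny 3SAT instance can be attacked by unaided brute force or handed to a modern CDCL solver, and the solver scales to instances that brute force never could.

```python
# Toy 3SAT instance: positive int = variable, negative int = its negation.
from itertools import product
from pysat.solvers import Glucose3  # third-party package: python-sat (assumed installed)

clauses = [[1, -2, 3], [-1, 2, -3], [2, 3, -1], [-2, -3, 1]]
num_vars = 3

def brute_force(clauses, n):
    """Unaided approach: try all 2^n assignments (fine here, hopeless at scale)."""
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause) for clause in clauses):
            return bits
    return None

print("brute force:", brute_force(clauses, num_vars))

# Tool-assisted approach: a CDCL SAT solver, which routinely handles instances with
# millions of clauses despite 3SAT's (likely) exponential worst case.
with Glucose3(bootstrap_with=clauses) as solver:
    print("SAT solver:", solver.solve(), solver.get_model())
```

The point is not that the solver beats the exponential worst case (it does not), but that a purpose-built tool vastly extends what the unaided reasoner can do in practice.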
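And for the first point, a minimal sketch of one of the named efficiency techniques, assuming PyTorch (my choice of framework, not something specified above): post-training dynamic quantization stores the weights of Linear layers in int8, trading a little accuracy for a smaller, faster model.

```python
import torch
import torch.nn as nn

# A stand-in model; any network with Linear layers works the same way.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

# Convert Linear layers to int8 weights with dynamically quantized activations.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 512)
print(model(x).shape, quantized(x).shape)  # same interface, lighter-weight model
```

Distillation and pruning are applied in the same spirit: standard optimizations an agent could in principle run on itself without hand-editing individual weights.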
To be clear, the complexity theory argument is against fast takeoff, not an argument that intelligence caps at some level relative to humans.
Analogy: log(n) approaches infinity, but it does so much more slowly than 2^n.
I.e., the sublinear asymptotics would prevent AI from progressing very quickly to a vastly superhuman level (unless the AI is able to grow its available resources quickly enough to dominate the poor asymptotics).
Alternatively, each order of magnitude increase in compute buys (significantly) less intelligence; thus progress from human level to a vastly superhuman level just can’t be very fast without a qualitative jump in the growth curves for compute investment.
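To put toy numbers on that diminishing-returns picture (the logarithmic model below is purely my illustrative assumption, not something the comment commits to): if capability grows like the log of compute, every fixed capability increment multiplies the required compute by a constant factor.

```python
# Illustrative assumption: capability(x) = k * log10(x) for compute x.
k = 1.0  # capability points gained per 10x compute (arbitrary units, assumed)

def compute_multiplier(capability_gain, k=k):
    """Factor by which compute must grow to buy a given capability gain under the log model."""
    return 10 ** (capability_gain / k)

for gain in [1, 2, 5]:
    print(f"+{gain} capability -> {compute_multiplier(gain):.0e}x more compute")
# Output: +1 -> 1e+01x, +2 -> 1e+02x, +5 -> 1e+05x.
# Each extra capability point multiplies the compute bill by another factor of 10.
```

Which is the claim above: without a qualitative change in how compute investment grows, each further step toward a vastly superhuman level gets disproportionately more expensive.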
Thanks for clarifying. Yeah, I agree the argument is mathematically correct, but it kinda doesn’t seem to apply to the historical cases of intelligence increase that we have:
Human intelligence is a drastic jump from primate intelligence, but this didn’t require a drastic jump in “compute resources” and took comparatively little time in evolutionary terms.
In human history, our “effective intelligence”—the capability to make decisions with the help of man-made tools—has grown at an increasing rate, not a decreasing one.
I’m still thinking about how best to reconcile this with the asymptotics. I think the other comments are right in that we’re still at the stage where improving the constants is very viable.
Human intelligence is a drastic jump from primate intelligence, but this didn’t require a drastic jump in “compute resources” and took comparatively little time in evolutionary terms.
Oh man am I not convinced of this at all. Human intelligence seems to me to be only the result of 1. scaling up primate brains and 2. accumulating knowledge in the form of language, which relied on 3. humans and hominids in general being exceptional at synchronized behavior and collective action (e.g., “charge!!!”). Modern primates besides humans are still exceptionally smart per synapse among the animal kingdom.
I agree that humans are not drastically more intelligent than all other animals. This makes the prospect of AI even scarier, in my opinion, since it shows how powerful accumulated progress is.
I believe that human-level intelligence is sufficient for an AI to be extremely dangerous if it can scale while maintaining self-alignment in the form of “synchronized behavior and collective action”. Imagine what a tech company could achieve if all employees had the same company-aligned goals, efficient coordination, in silico processing speeds, high-bandwidth communication of knowledge, etc. With these sorts of advantages, it’s likely game over before it hits human-level intelligence across the board.
Indeed. My commentary should not be seen as a reason to believe we’re safe—just a reason to believe the curve sharpness isn’t quite as bad as it could have been imagined to be.
My impression is that the human brain is a scaled-up primate brain.
As for humanity’s effective capabilities increasing with time:
Language allowed accumulation of knowledge across generations, plus cultural evolution
Population growth has been (super)exponential over the history of humanity
Larger populations afforded specialisation/division of labour, trade, economics, industry, etc.
Alternatively, our available resources have grown at a superexponential rate.
The issue is takeoff being fast relative to the reaction time of civilisation. The AI would need to grow its invested resources much faster than civilisation has been growing them to date.
But resource investment seems primed to slow down if anything.
Resource accumulation certainly can’t grow exponentially indefinitely and I agree that RSI can’t improve exponentially forever either, but it doesn’t need to for AI to take over.
An AI doesn’t have to get far beyond human-level intelligence to control the future. If there’s sufficient algorithmic overhang, current resources might even be enough. FOOM would certainly be easier if no new hardware were necessary. This would look less like an explosion and more like a quantum leap followed by slower growth as physical reality constrains rapid progress.
Explain the inside view of “algorithmic overhang”?
I don’t have an inside view. If I did, that would be pretty powerful capabilities information.
I’m pointing at the possibility that we already have more than sufficient resources for AGI and we’re only separated from it by a few insights (à la transformers) and clever system architecture. I’m not predicting this is true, just that it’s plausible based on existing intelligent systems (humans).
Epistemic status: pondering aloud to coalesce my own fuzzy thoughts a bit
I’d speculate that the missing pieces are conceptually tricky things like self-referential “strange loops”, continual learning with updateable memory, and agentic interaction with an environment. These are only vague ideas in my mind; for some reason they feel difficult to solve, but they don’t feel like things that require massive data and training resources so much as useful connections to reality and to the system itself.