My impression is that the human brain is a scaled-up primate brain.
As for humanity’s effective capabilities increasing with time:
- Language allowed accumulation of knowledge across generations, plus cultural evolution
- Population growth has been (super)exponential over the history of humanity
- Larger populations afforded specialisation/division of labour, trade, economics, industry, etc.
Alternatively, our available resources have grown at a superexponential rate.
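To be concrete about what I mean by “(super)exponential” (a toy model of mine, not a fit to the historical data): exponential growth has a constant relative growth rate, whereas if the growth rate itself rises with population, the solution blows up in finite time.

```latex
% Exponential: constant relative growth rate r; no finite-time divergence.
\frac{dP}{dt} = rP \;\Rightarrow\; P(t) = P_0 e^{rt}

% Hyperbolic ("superexponential"): the rate increases with P, and the
% solution diverges at the finite time t^* = 1/(k P_0).
\frac{dP}{dt} = kP^{2} \;\Rightarrow\; P(t) = \frac{P_0}{1 - k P_0 t}
```

Historical world population growth is usually described as closer to the hyperbolic regime than the exponential one (growth rates rose with population for most of history), which is all I’m claiming above.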
The issue is takeoff being fast relative to the reaction time of civilisation: the AI would need to grow its invested resources much faster than civilisation has managed to date.
But resource investment seems primed to slow down if anything.
Resource accumulation certainly can’t grow exponentially indefinitely and I agree that RSI can’t improve exponentially forever either, but it doesn’t need to for AI to take over.
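To put a rough number on that ceiling (back-of-envelope, order-of-magnitude figures only): world power use is of order 10^13 W, and the Sun’s entire output is about 3.8×10^26 W, so even doubling resource use every year runs out of sun within decades.

```latex
% Number of doublings from ~current world power use (~10^{13} W)
% to the Sun's total luminosity (L_sun ~ 3.8 x 10^{26} W):
n = \log_2\!\left(\frac{3.8 \times 10^{26}\,\mathrm{W}}{10^{13}\,\mathrm{W}}\right) \approx 45
```

So even an absurdly fast exponential hits astrophysical limits quickly; the question is what happens well before then.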
An AI doesn’t have to get far beyond human-level intelligence to control the future. If there’s sufficient algorithmic overhang, current resources might even be enough. FOOM would certainly be easier if no new hardware were necessary. This would look less like an explosion and more like a quantum leap followed by slower growth as physical reality constrains rapid progress.
Explain the inside view of “algorithmic overhang”?
I don’t have an inside view. If I did, that would be pretty powerful capabilities information.
I’m pointing at the possibility that we already have more than sufficient resources for AGI and we’re only separated from it by a few insights (a la transformers) and clever system architecture. I’m not predicting this is true, just that it’s plausible based on existing intelligent systems (humans).
Epistemic status: pondering aloud to coalesce my own fuzzy thoughts a bit
I’d speculate that the missing pieces are conceptually tricky things like self-referential “strange loops”, continual learning with updateable memory, and agentic interactions with an environment. These are only vague ideas in my mind, and for some reason they feel difficult to solve, yet they don’t feel like things that require massive data and training resources so much as useful connections to reality and to the system itself.
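To make that a bit less hand-wavy, here’s a toy sketch of the sort of outer loop I’m imagining: a fixed model wrapped in scaffolding that acts in an environment, writes experience to an updateable memory, and conditions on (and rereads) that memory, including records of its own past reasoning. Every name here (`model.decide`, `env.reset`/`env.step`, `Memory`) is a hypothetical placeholder of my own, not any real API; the point is only that this is an architecture question, not a compute question.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    """Updateable store the agent can read, append to, and later revise."""
    entries: list = field(default_factory=list)

    def recall(self, k: int = 5) -> list:
        # Crude recency-based retrieval; a real system would need
        # something much smarter here.
        return self.entries[-k:]

    def write(self, entry: str) -> None:
        self.entries.append(entry)

def run_agent(model, env, memory: Memory, steps: int = 100) -> None:
    """Toy continual-learning loop: act, observe, and fold the outcome
    (plus the agent's own reasoning about it) back into memory.
    `model` and `env` are duck-typed placeholders."""
    obs = env.reset()
    for _ in range(steps):
        context = memory.recall()                       # condition on past experience
        action, reasoning = model.decide(obs, context)  # hypothetical interface
        obs, feedback = env.step(action)
        # The "strange loop" part: the agent stores, and will later reread,
        # its own reasoning alongside the environment's feedback.
        memory.write(f"thought: {reasoning} | did: {action} | got: {feedback}")
```

Nothing in that loop needs more compute per step than the frozen model already uses per call, which is why these pieces feel to me like architecture problems rather than scaling problems.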