In some sense the Foom already occurred—it was us. But it wasn’t the result of any new feature in the brain—our brains are just standard primate brains, scaled up a bit[14] and trained for longer. Human intelligence is the result of a complex one time meta-systems transition: brains networking together and organizing into families, tribes, nations, and civilizations through language. … That transition only happens once—there are not ever more and more levels of universality or linguistic programmability. AGI does not FOOM again in the same way.
Although I agree the ‘meta-systems transition’ is a super important shift, one which can lead us to overestimate the level of difference between us and previous apes, it also doesn’t seem like it was just a one-time shift. We had fire, stone tools, and probably language for literally millions of years before the Neolithic revolution. And for the Industrial Revolution, it seems that a few bits of cognitive technology (not even genes, just memes!) in Renaissance Europe suddenly sent the world off on a whole new exponential.
The lesson, for me, is that the capability level of the meta-system/technology frontier is a very sensitive function of the kinds of intelligences operating within it, and we therefore shouldn’t feel at all confident generalising out of distribution. And once we start to incorporate feedback loops from the technology frontier back into the underlying intelligences that are developing that technology, all modelling goes out the window.
From a technical modelling perspective, I understand that the Roodman model you reference below (hard singularity at median 2047) includes both hyperbolic growth and random shocks, so even within that model we shouldn’t be too surprised to see a sudden shift in gears and a much earlier singularity, even before accounting for RSI taking us off-script somehow.
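To make the ‘hyperbolic growth plus random shocks’ point concrete, here is a minimal toy simulation in the spirit of that kind of model (much simpler than, and not calibrated to, Roodman’s actual specification; the parameters `a`, `b`, `sigma`, the starting year, and the blow-up cap are all made up for illustration). The spread of blow-up dates across runs is wide, which is the sense in which a much earlier singularity wouldn’t be surprising from inside such a model:

```python
import numpy as np

rng = np.random.default_rng(0)

def blowup_year(a=0.02, b=0.5, sigma=0.15, y0=1.0, start_year=2023,
                dt=0.1, horizon=500.0, cap=1e9):
    """One path of a toy stochastic hyperbolic-growth process,
    dY/Y = a * Y**b * dt + sigma * dW (illustrative only, not Roodman's model).
    Returns the calendar year at which the path exceeds `cap`
    (a stand-in for "singularity"), or None if it never does."""
    y, t = y0, 0.0
    while t < horizon:
        drift = a * y**b                          # growth rate rises with output
        shock = sigma * np.sqrt(dt) * rng.normal()
        y *= np.exp((drift - 0.5 * sigma**2) * dt + shock)
        t += dt
        if y > cap:
            return start_year + t
    return None

samples = [blowup_year() for _ in range(500)]
years = np.array([y for y in samples if y is not None])
print(f"paths blowing up within the horizon: {len(years)} / {len(samples)}")
print(f"median blow-up year: {np.median(years):.0f}")
print(f"10th-90th percentile: {np.percentile(years, 10):.0f}-{np.percentile(years, 90):.0f}")
```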
To expand on the idea of meta-systems and their capability: similarly to the discussion of brain efficiency, we could ask about the efficiency of our civilization (in the sense of being able to point its capabilities at a unified goal) among all possible ways of organising civilizations. If our civilization is very inefficient, AI could figure out a better design and foom that way.
Primarily, I think the answer to the question of our civilization’s efficiency is unclear. But my intuition is that our civilization is quite inefficient, with the following points serving as weak evidence:
1. Civilization hasn’t been around that long, and has therefore not been optimised much.
2. Point (1) gets even more pronounced as you go from “designs for cooperation among a small group” to “designs for cooperation among millions”, or even billions. (Because fewer of these were running in parallel, and for a shorter time.)
3. The fact that civilization runs on humans, who are selfish etc., might severely limit the space of designs that have been tried.
4. As a lower bound, it seems that something like Yudkowsky’s ideas about dath ilan might work. (Not to be confused with “we can get there from here”, “it works for humans”, or “none of Yudkowsky’s ideas have holes in them”.)
None of this contradicts your arguments, but it adds uncertainty and should make us more cautious about AI. (Not that I interpret the post as advocating against caution.)
Although I agree the ‘meta-systems transition’ is a super important shift, one which can lead us to overestimate the level of difference between us and previous apes, it also doesn’t seem like it was just a one-time shift.
Yes, in the sense that if you zoom in you’ll see language starting with simplistic, low-bit-rate communication and steadily improving, followed by writing for external memory, the printing press, telecommunication, computers, etc. Noosphere to technosphere.
But those improvements are not happening in human brains; they are cybernetic and externalized.
Yeah, I agree it’s not in human brains. I’m not really disagreeing with the bulk of the argument re brains, just about whether it does much to reduce foom %. Maybe it constrains the ultra-fast scenarios a bit, but not much more imo.
“Small” (i.e. << 6 OOM) jump in underlying brain function from current-paradigm AI → gigantic shift in the tech frontier’s rate of change → exotic tech becomes quickly reachable → YudFoom.
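To put a toy number on that chain: in the same kind of hyperbolic model as the sketch above, the remaining time to the frontier ‘blowing up’ scales inversely with the effective capability coefficient, so even a modest multiplier on underlying capability (nowhere near 6 OOM) collapses the remaining runway by roughly the same factor. A minimal sketch, with `a`, `b`, and `y0` again purely illustrative:

```python
# Toy deterministic hyperbolic model, dY/dt = a * Y**(1 + b), whose remaining
# time to blow-up from Y(0) = y0 is t* = 1 / (a * b * y0**b). The coefficient
# `a` is an illustrative stand-in for "underlying capability"; nothing here is
# calibrated to real data.
def time_to_blowup(a, b=0.5, y0=1.0):
    return 1.0 / (a * b * y0**b)

baseline_a = 0.02
print(f"baseline: {time_to_blowup(baseline_a):.0f} years to blow-up")
for k in (3, 10, 100):  # "small" capability jumps, far below 6 OOM
    print(f"{k:>3}x capability: {time_to_blowup(baseline_a * k):6.1f} years to blow-up")
```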