(Obviously it is somehow feasible to make an AGI, because evolution did it.)
This parenthetical is one of the reasons why I think AGI is likely to come soon.
The example of human evolution provides a strict upper bound on the difficulty of creating (true, lethally dangerous) AGI, and of packing it into a 10 W, 1000 cm³ box.
That doesn’t mean that recreating the method used by evolution (iterative mutation over millions of years at planet scale) is the only way to discover and learn general-purpose reasoning algorithms. Evolution had a lot of time and resources to run, but it is an extremely dumb optimization process that is subject to a bunch of constraints and quirks of biology, which human designers are already free of.
To me, LLMs and other recent AI capabilities breakthroughs are evidence that methods other than planet-scale iterative mutation can get you something, even if it’s still pretty far from AGI. And I think it is likely that capabilities research will continue to lead to scaling and algorithmic progress that will get you more and more something. But progress of this kind can’t keep adding up to merely something forever; eventually it will hit on human-level (or better) reasoning ability.
The inference I make from observing both the history of human evolution and the spate of recent AI capabilities progress is that human-level intelligence can’t be that special or difficult to create in an absolute sense, and that while evolutionary methods (or something isomorphic to them) at planet scale are sufficient to get to general intelligence, they’re probably not necessary.
Or, put another way:
Finally: I also see a fair number of specific “blockers”, as well as some indications that existing things don’t have properties that would scare me.
I mostly agree with the point about existing systems, but I think there are only so many independent high-difficulty blockers which can “fit” inside the AGI-invention problem, since evolution somehow managed to solve them all through inefficient brute force. LLMs are evidence that at least some of the (perhaps easier) blockers can be solved via methods that are tractable to run on current-day hardware on far shorter timescales than evolution.
“It” here refers to progress from human ingenuity, so I’m hesitant to put any limits whatsoever on what it will produce and how fast, let alone a limit below what evolution has already achieved.
(Glib answers in place of no answers)
Or it’s limited to a submanifold of generators.
I don’t think this is a good description of evolution.
Hmm, yeah. The thing I am trying to get at is that evolution is very dumb and limited in some ways (in the sense of “An Alien God” and “Evolutions Are Stupid (But Work Anyway)”), compared to human designers, but managed to design a general intelligence anyway, given enough time / energy / resources.
By “inefficient”, I mean that human researchers with GPUs can (probably) design and create general intelligence orders of magnitude (OOM) faster than evolution. Humans are likely to accomplish such a feat in decades or centuries at the most, so I think it is justified to call any process which takes millennia or longer inefficient, even if the human-based decades-long design process hasn’t actually succeeded yet.
Ok. This makes sense. And I think just about everyone agrees that evolution is very inefficient, in the sense that with some work (but vastly less time than evolution used) humans will be able to figure out how to make a thing that, using far fewer resources than evolution used, makes an AGI.
I was objecting to “brute force”, not “inefficient”. It’s brute force in some sense, like it’s “just physics” in the sense that you can just set up some particles and then run physics forward and get an AGI. But it also uses a lot of design ideas (stuff in the genome, and some ecological structure). It does a lot of search on a lot of dimensions of design. If you don’t efficient-ify your big evolution, you’re invoking a lot of compute; if you do efficient-ify, you might be cutting off those dimensions of search.
“It” here refers to progress from human ingenuity, so I’m hesitant to put any limits whatsoever on what it will produce and how fast
There’s a contingent fact which is how many people are doing how much great original natural philosophy about intelligence and machine learning. If I thought the influx of people were directed at that, rather than at other stuff, I’d think AGI was coming sooner.
Humans are likely to accomplish such a feat in decades or centuries at the most,
As I said in the post, I agree with this, but I think it requires a bunch of work that hasn’t been done yet, some of it difficult / requires insights.
I actually think another lesson from both evolution and LLMs is that it might not require much or any novel philosophy or insight to create useful cognitive systems, including AGI. I expect high-quality explicit philosophy to be one way of making progress, but not the only one.
Evolution itself did not do any philosophy in the course of creating general intelligence, and humans themselves often manage to grow intellectually and get smarter without doing natural philosophy, explicit metacognition, or deep introspection.
So even if LLMs and other current DL paradigm methods plateau, I think it’s plausible, even likely, that capabilities research like Voyager will continue making progress for a lot longer. Maybe Voyager-like approaches will scale all the way to AGI, but even if they also plateau, I expect that there are ways of getting unblocked other than doing explicit philosophy of intelligence research or massive evolutionary simulations.
In terms of responses to arguments in the post: it’s not that there are no blockers, or that there’s just one thing we need, or that big evolutionary simulations will work or be feasible any time soon. It’s just that explicit philosophy isn’t the only way of filling in the missing pieces, however large and many they may be.
Related—“There are always many ways through the garden of forking paths, and something needs only one path to happen.”
Don’t you think that once scaling hits the wall (assuming it does) the influx of people will be redirected towards natural philosophy of intelligence and ML?
Yep! To some extent. That’s what I meant by “It also seems like people are distracted now”, above. I have a denser probability on AGI in 2037 than on AGI in 2027, for that reason.
Natural philosophy is hard, and somewhat has serial dependencies, and IMO it’s unclear how close we are. (That uncertainty includes “plausibly we’re very very close, just another insight about how to tie things together will open the floodgates”.) Also there’s other stuff for people to do. They can just quiesce into bullshit jobs; they can work on harvesting stuff; they can leave the field; they can work on incremental progress.