@robbb, I thought Carl’s reply near the top here was a pretty strong explanation for why non-discontinuity in other domains is relevant (which you were asking about). I also thought this was a good point:
So the ‘we find a secret sauce algorithm that causes a massive unprecedented performance jump, without crappier predecessors’ is a ‘separate, additional miracle’ at exactly the same time as the intelligence explosion is getting going.
I’m curious what you think of the points he made.
I think I don’t understand Carl’s “separate, additional miracle” argument. From my perspective, the basic AGI argument is:
“General intelligence” makes sense as a discrete thing you can invent at a particular time. We can think of it as: performing long chains of reasoning to steer messy physical environments into specific complicated states, in the way that humans do science and technology to reshape their environment to match human goals. Another way of thinking about it is ‘AlphaGo, but the game environment is now the physical world rather than a Go board’.
Humans (our only direct data point) match this model: we can do an enormous variety of things that were completely absent from our environment of evolutionary adaptedness, and when we acquired this suite of abilities we ‘instantly’ (on a geologic timescale) had a massive discontinuous impact on the world.
So we should expect AI, at some point, to go from ‘can’t do sophisticated reasoning about messy physical environments in general’ to ‘can do that kind of reasoning’, at which point you suddenly have an ‘AlphaGo of the entire physical world’. Which implies all the standard advantages of digital minds over human minds, such as:
We can immediately scale AlphaWorld with more hardware, rather than needing to wait for an organism to biologically reproduce.
We can rapidly iterate on designs and make deliberate engineering choices, rather than waiting to stumble on an evolutionarily fit point mutation.
We can optimize the system directly for things like scientific reasoning, whereas human brains can do science only as a side-effect of our EEA capacities.
When you go from not-having an invention to having one, there’s always a discontinuous capabilities jump. Usually, however, the jump doesn’t have much immediate impact on the world as a whole, because the thing you’re inventing isn’t a super-high-impact sort of thing. When you go from 0 to 1 on building Microsoft Word, you have a discontinuous Microsoft-Word-sized impact on the world. When you go from 0 to 1 on building AGI, you have a discontinuous AGI-sized impact on the world.
Thinking in the abstract about ‘how useful would it be to be able to automate all reasoning about the physical world / all science / all technology?’ is totally sufficient to make it clear why this impact would probably be enormous; though if we have doubts about our ability to abstractly reason to this conclusion, we can look at the human case too.
In that context, I find the “separate, additional miracle” argument weird. There’s no additional miracle here, because we aren’t assuming AGI and intelligence explosion as two separate axioms. Rather, AGI implies intelligence explosion because the ‘be good at reasoning about physical environments in general, constructing long chains of reasoning, strategically moving between different levels of abstraction, organizing your thoughts in a more laserlike way, doing science and technology’ thing implies being able to do AI research, for the same reason humans are able to do AI research. (And once AI can do AI research, it’s trivial to see why this would accelerate AI research, and why this acceleration could feed on itself until it runs out of things to improve.)
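To make the ‘feed on itself’ point a bit more concrete, here’s a purely illustrative toy model (the functional form and every constant in it are arbitrary illustrations, not a forecast of real dynamics): once some share of the improvement to research capability comes from the system itself, growth that looks steady for a long time eventually compounds and takes off.

```python
# Purely illustrative toy model, not a forecast: research capability over time,
# with and without a self-improvement feedback term.
def simulate(feedback, steps=80, dt=1.0):
    capability = 1.0  # research capability, arbitrary units
    trajectory = [capability]
    for _ in range(steps):
        human_progress = 0.05                # steady human-driven improvement
        ai_progress = feedback * capability  # AI-driven improvement, proportional to current capability
        capability += dt * (human_progress + ai_progress)
        trajectory.append(capability)
    return trajectory

baseline = simulate(feedback=0.0)   # no feedback: roughly linear growth (ends near 5)
explosion = simulate(feedback=0.1)  # with feedback: compounding growth (ends in the thousands)
print(baseline[-1], explosion[-1])
```

Nothing hinges on the particular constants; the only point of the sketch is that adding the feedback term changes the qualitative shape of the curve.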
If you believe intelligence explosion is a thing but don’t think AGI is a thing, then sure, I can put myself in a mindset where it’s weird to imagine two different world-changing events happening at around the same time (‘I’ve already bought into intelligence explosion; now you want me to also buy into this crazy new thing that’s supposed to happen at almost the exact same time?!’).
But this reaction seems to require zooming out to the level of abstraction ‘these are two huge world-impacting things; two huge world-impacting things shouldn’t happen at the same time!’. The entire idea of AGI is ‘multiple world-impacting sorts of things happen simultaneously’; otherwise we wouldn’t call it ‘general’, and wouldn’t talk about getting the capacity to do particle physics and pharmacology and electrical engineering simultaneously.
The fact that, e.g. AIs are mastering so much math and language while still wielding vastly infrahuman brain-equivalents, and crossing human competence in many domains (where there was ongoing effort) over decades is significant evidence for something smoother than the development of modern humans and their culture.
I agree with this as a directional update — it’s nontrivial evidence for some combination of (a) ‘we’ve already figured out key parts of reasoning-about-the-physical-world, and/or key precursors’ and (b) ‘you can do a lot of impressive world-impacting stuff without having full general intelligence’.
But I don’t in fact believe on this basis that we already have baby AGIs. And if the argument isn’t ‘we already have baby AGIs’ but rather ‘the idea of “AGI” is wrong, we’re going to (e.g.) gradually get one science after another rather than getting all the sciences at once’, then that seems like directionally the wrong update to make from Atari, AlphaZero, GPT-3, etc. E.g., we don’t live in a Hanson-esque world where AIs produce most of the scientific progress in biochemistry but the field has tried and failed for years to make serious AI-mediated progress on aerospace engineering.
Thanks for the in-depth response! I think I have a better idea now where you’re coming from. A couple follow-up questions:
But I don’t in fact believe on this basis that we already have baby AGIs. And if the argument isn’t ‘we already have baby AGIs’ but rather ‘the idea of “AGI” is wrong, we’re going to (e.g.) gradually get one science after another rather than getting all the sciences at once’, then that seems like directionally the wrong update to make from Atari, AlphaZero, GPT-3, etc.
Do you think that human generality of thought requires a unique algorithm and/or brain structure that’s not present in chimps? Rather than our brains just being scaled up chimp brains that then cross a threshold of generality (analogous to how GPT-3 had much more general capabilities than GPT-2)?
Would it not be reasonable to think of chimp brains as like ‘baby’ human brains?
I think Carl’s comment about an ‘additional miracle’ makes sense if you think that the most direct path to AGI is roughly via scaling up today’s systems. In that case, it would seem to be quite the coincidence if some additional general-thought technology was invented right around the same time that ML systems were scaling up enough to have general capabilities.
Does the ‘additional miracle’ comment make sense if you assume that frame – that AGI will come from something like scaled up versions of current ML systems?
Do you think that human generality of thought requires a unique algorithm and/or brain structure that’s not present in chimps? Rather than our brains just being scaled up chimp brains that then cross a threshold of generality (analogous to how GPT-3 had much more general capabilities than GPT-2)?
I think human brains aren’t just bigger chimp brains, yeah.
(Though it’s not obvious to me that this is a crux. If human brains were just scaled-up chimp brains, it wouldn’t necessarily be the case that chimps are scaled-up ‘thing-that-works-like-GPT’ brains, or scaled-up pelycosaur brains.)
Does the ‘additional miracle’ comment make sense if you assume that frame – that AGI will come from something like scaled up versions of current ML systems?
If scaling up something like GPT-3 got you to AGI, I’d still expect discontinuous leaps as the tech reached the ‘can reason about messy physical environments at all’ threshold (and probably other leaps too). Continuous tech improvement doesn’t imply continuous cognitive output to arbitrarily high levels. (Nor does continuous cognitive output imply continuous real-world impact to arbitrarily high levels!)
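One toy way to see the first claim (the chaining framing and the specific numbers below are purely illustrative, not a model of any real system): if acting usefully in a messy environment means chaining together many reasoning steps, then a smooth improvement in per-step reliability shows up as an abrupt change in how often the whole chain succeeds.

```python
# Purely illustrative: smooth improvement in a component skill can look like a
# sudden jump in end-to-end capability when tasks require long chains of steps.
CHAIN_LENGTH = 100  # number of steps in a useful plan (arbitrary)
for per_step in [0.90, 0.95, 0.97, 0.99, 0.995]:
    end_to_end = per_step ** CHAIN_LENGTH
    print(f"per-step reliability {per_step:.3f} -> whole-chain success {end_to_end:.3f}")
```

In this toy setup, a roughly ten percent improvement in the component takes whole-chain success from essentially never to more often than not, which is the sense in which continuous improvement in the underlying tech needn’t translate into continuous growth in usable cognitive output.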
If scaling up something like GPT-3 got you to AGI, I’d still expect discontinuous leaps as the tech reached the ‘can reason about messy physical environments at all’ threshold
Do none of A) GPT-3 producing continuations about physical environments, or B) MuZero learning a model of the environment, or even C) a Tesla driving on Autopilot, count?
It seems to me that you could consider these to be systems that reason about the messy physical world poorly, but definitely ‘at all’.
Is there maybe some kind of self-directedness or agenty-ness that you’re looking for that these systems don’t have?
(EDIT: I’m digging in on this in part because it seems related to a potential crux that Ajeya and Nate noted here.)
Relative to what I mean by ‘reasoning about messy physical environments at all’, MuZero and Tesla Autopilot don’t count. I could see an argument for GPT-3 counting, but I don’t think it’s in fact doing the thing.
Gotcha, thanks for the follow-up.
Btw, I just wrote up my current thoughts on the path from here to AGI, inspired in part by this discussion. I’d be curious to know where others disagree with my model.
This seems like a fairly important crux. I see it as something that has been developed via many steps that are mostly small.