Do you think that human generality of thought requires a unique algorithm and/or brain structure that’s not present in chimps, rather than our brains just being scaled-up chimp brains that then cross a threshold of generality (analogous to how GPT-3 had much more general capabilities than GPT-2)?
I think human brains aren’t just bigger chimp brains, yeah.
(Though it’s not obvious to me that this is a crux. If human brains were just scaled-up chimp brains, it wouldn’t necessarily be the case that chimps are scaled-up ‘thing-that-works-like-GPT’ brains, or scaled-up pelycosaur brains.)
Does the ‘additional miracle’ comment make sense if you assume that frame (that AGI will come from something like scaled-up versions of current ML systems)?
If scaling up something like GPT-3 got you to AGI, I’d still expect discontinuous leaps as the tech reached the ‘can reason about messy physical environments at all’ threshold (and probably other leaps too). Continuous tech improvement doesn’t imply continuous cognitive output to arbitrarily high levels. (Nor does continuous cognitive output imply continuous real-world impact to arbitrarily high levels!)
If scaling up something like GPT-3 got you to AGI, I’d still expect discontinuous leaps as the tech reached the ‘can reason about messy physical environments at all’ threshold
Do none of A) GPT-3 producing continuations about physical environments, or B) MuZero learning a model of the environment, or even C) a Tesla driving on Autopilot, count?
It seems to me that you could consider these to be systems that reason about the messy physical world poorly, but definitely ‘at all’.
Is there maybe some kind of self-directedness or agenty-ness that you’re looking for that these systems don’t have?
(EDIT: I’m digging in on this in part because it seems related to a potential crux that Ajeya and Nate noted here.)
Relative to what I mean by ‘reasoning about messy physical environments at all’, MuZero and Tesla Autopilot don’t count. I could see an argument for GPT-3 counting, but I don’t think it’s in fact doing the thing.
Gotcha, thanks for the follow-up.
Btw, I just wrote up my current thoughts on the path from here to AGI, inspired in part by this discussion. I’d be curious to know where others disagree with my model.