If scaling up something like GPT-3 got you to AGI, I’d still expect discontinuous leaps as the tech reached the ‘can reason about messy physical environments at all’ threshold.
Do none of A) GPT-3 producing continuations about physical environments, or B) MuZero learning a model of the environment, or even C) a Tesla driving on Autopilot, count?
It seems to me that you could consider these to be systems that reason about the messy physical world poorly, but definitely ‘at all’.
Is there maybe some kind of self-directedness or agenty-ness that you’re looking for that these systems don’t have?
(EDIT: I’m digging in on this in part because it seems related to a potential crux that Ajeya and Nate noted here.)
Relative to what I mean by ‘reasoning about messy physical environments at all’, MuZero and Tesla Autopilot don’t count. I could see an argument for GPT-3 counting, but I don’t think it’s in fact doing the thing.
Gotcha, thanks for the follow-up.
Btw, I just wrote up my current thoughts on the path from here to AGI, inspired in part by this discussion. I’d be curious to know where others disagree with my model.