For purposes of this post, I am defining AGI as something that can (i) outperform the average trained human on 90% of tasks and (ii) does not routinely produce clearly false or incoherent answers.
Based on this definition, AGI arguably already exists, or nearly does. ChatGPT is arguably already an AGI because it can, for example, score around 1000 on the SAT, which is roughly the average human score.
I think a better definition would be a model that can outperform professionals at most tasks: for example, a model that writes better than a human New York Times writer.
To be sure, I think the chance that AGI under this stricter definition will be developed before January 1, 2029 is still low, on the order of 3%; but there is a vast difference between a small but measurable probability and “not going to happen”.
Even if one doesn’t believe ChatGPT is an AGI, it doesn’t seem like we need much additional progress to create a model that can outperform the average human at most tasks.
I personally think there is a ~50% chance of this level of AGI being achieved by 2030.