It seems very likely (~96%) to me that scale alone is not what separates current frontier models from AGI: GPT-8 (say) will still not be superintelligent or a near-perfect predictor and generator of text, because what separates it from today's models is largely a difference of scale, not a difference of kind or of underlying conceptual model. I consider it more likely that we'll get AGI ~30 years out, and that we'll have to get alignment precisely right.
You might want to gesture at why this seems likely to you, since AFAICT this is a minority view.
Writing a bit about this now.