I think it’s about salience. If you “feel the AGI,” then you’ll automatically remember that transformative AI is a thing that’s probably going to happen, when relevant (e.g. when planning AI strategy, or when making 20-year plans for just about anything). If you don’t feel the AGI, then even if you’ll agree when reminded that transformative AI is a thing that’s probably going to happen, you don’t remember it by default, and you keep making plans (or publishing papers about the economic impacts of AI or whatever) that assume it won’t.