I don’t personally work in AI. But OpenAI, for example, states clearly in its own mission that it aims to build AGI, and Sam Altman wrote a whole post called “Moore’s Law for Everything” in which he outlines his vision for an AGI future. I consider it naïve nonsense, personally, but the drive seems to be simply the idea of a utopian world of abundance, with technological development accelerating as AGI makes itself smarter.
EDIT: sorry, didn’t realise you weren’t replying to me, so my answer doesn’t make a lot of sense. Still, gonna leave it here.