Why is AGI/ASI Inevitable?

Hello! My name is Amy.

This is my first LessWrong post. I’m somewhat certain it will be deleted, but I’m giving it a shot anyway, because I’ve seen this argument thrown around in a few places and I still don’t understand it. I’ve read a few chunks of the Sequences, and the fundamentals-of-rationality sequences.

What makes artificial general intelligence ‘inevitable’? What makes artificial superintelligence ‘inevitable’? Can’t people simply decide not to build AGI/ASI?

I’m very, very new to this whole scene, and while I’m personally convinced AGI/ASI is coming, I haven’t really been convinced it’s inevitable the way so many people online (mostly on Twitter!) seem to be.

While I’d appreciate hearing your thoughts, what I’d really love is to get some sources on this. What are the best sequences to read on this topic? Are there any studies or articles that make this argument?

Or is this all just some ridiculous claim those ‘e/acc’ people cling to?

Hope this doesn’t get deleted! Thank you for your help!