In my opinion the relevant detail is that we were not able to prevent the Soviets from getting the bomb. It took them only about three years. It’ll take China, Russia, open-source hackers, et al. about 18 months at most to replicate AGI once it arrives. So much for your decisive strategic advantage.
Nuclear technology sits hours away from reducing the value of all human civilization by 10%, and for all we knew when it was developed, that figure could have been 100%. That’s the nuclear threat. I wouldn’t even classify it as a “geopolitical” threat. That Soviet nuclear technology pretty quickly became comparable to US nuclear technology isn’t the most salient fact in the story. The story is that research got really close, and is still really close, to releasing hell, and the door to hell looks pretty easy to open.
18 months is more than enough to get a decisive strategic advantage (DSA) if AGI turns out to be anything like what we fear (that is, something really powerful and difficult to control, probably reaching that state quickly through an intelligence explosion).
In fact, I’d even argue 18 days might be enough. AI is already beginning to solve protein folding (AlphaFold). If it progresses from there to building nanosystems, that’s more than enough for a DSA, i.e. enough to take over the world. We already see AIs like MuZero learning in hours what would take a human a lifetime to learn, so it wouldn’t surprise me if an advanced AI solved advanced nanotech in a few days.
Whether the first AGI will be aligned is the far more concerning question. Not because who gets there first isn’t also extremely important, but because getting there first is the “easy” part.
I don’t really think advanced AI can be compared to atomic bombs. The former is a way more explosive technology, pun intended.