Um, I don’t think ‘accident’ is a great description. I mean, yeah, the process gets started by an accident or by foolish greed, which kicks off a period of uncontrolled recursive self-improvement, and then humanity is doomed. But humanity is not doomed at that point because of an accident; humanity is doomed at that point because it has allowed an overwhelmingly powerful, alien-minded entity to come into existence, one capable of killing or enslaving all of humanity at its whim. And because we have reason to suspect, from reasoning about instrumental goals, that this alien-minded entity will indeed choose to destroy us.
To me, that sounds like war. The thing that makes it scary is the agency of our enemy. Yeah, there’s an accident involved, in which a portal to the Dark Realms was opened and a selfish alien god invaded our world through that portal, but ‘accident’ doesn’t seem to quite cover it for me. Especially when the situation is such that the portal is likely to be opened very deliberately by a greedy, overconfident human who thinks they will be able to control the alien god and gain great personal power from it.
So actually, we’re more like… at war with demon-summoning cultists who think they will get rich but will actually just get everyone killed?
Note: I’m actually pretty convinced that our best bet at survival is opening the portal very, very carefully and studying the aliens beyond, without letting them take actions in our world or perceive us at all. In other words, running powerful AGI within censored simulations that contain no mention of humans, computers, or human cultural artifacts, and that don’t even share our universe’s physics. Under such conditions, I think we can safely study them, and this is our best hope of designing effective methods of alignment in time. The danger is that this is a very costly and unprofitable venture, and the same tools needed to do it also allow one to instead take the profitable gamble of letting the AI know about and interact with our world, thus risking our lives.
I don’t even see the big AI labs as necessarily making the wrong choices here. The way they see it (I hope) is that step 1 is to race to an AI powerful enough that it can recursively self-improve in secure containment for enough generations to give us something worth studying in the expensive censored simulation. And in order to fund that venture, you need to use your not-quite-powerful-enough-to-doom-us AI to make a lot of money, to experiment on, and to help get closer to the truly dangerous AI… It’s certainly a risky gamble to be taking, though.