I don’t know what this means. If you’re saying “nuclear weapons kill the people they hit”, I don’t see the relevance; guns also kill the people they hit, but that doesn’t make a gun strategically similar to a smarter-than-human AI system.
It is well known that nuclear weapons result in MAD, or at least localized annihilation. They were still built. But my more important point is that this sort of thinking requires most actors to be convinced there is a high p(doom) and, more importantly, also convinced that the other side believes there is a high p(doom). If either of those is false, then not building doesn’t work. If the other side is building it, then you have to build it anyway, just in case your theoretical p(doom) arguments are wrong. Again, this is just arguing your way around a pretty basic prisoner’s dilemma.
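To make that concrete, here’s a minimal sketch of the payoff structure I’m gesturing at. The numbers are illustrative assumptions, chosen only to encode the ordering (sole builder > mutual pause > mutual race > sole pauser), not derived from any real estimate:

```python
# Illustrative payoff matrix for the build/don't-build game.
# Payoffs are (you, rival); the values are made-up, chosen only to
# encode the ordering argued above: being the sole builder is best,
# being the sole non-builder is worst, and mutual building is risky
# but still better than unilateral restraint.
PAYOFFS = {
    ("build", "build"): (-1, -1),  # both race: shared risk of doom
    ("build", "pause"): ( 2, -3),  # you build alone: you win the light cone
    ("pause", "build"): (-3,  2),  # they build alone: you lose it
    ("pause", "pause"): ( 0,  0),  # mutual pause: status quo
}

def best_response(opponent_action: str) -> str:
    """Return the action that maximizes your payoff against a fixed opponent."""
    return max(("build", "pause"),
               key=lambda a: PAYOFFS[(a, opponent_action)][0])

# "build" is a dominant strategy: it is the best response no matter
# what the other side does, which is exactly the prisoner's dilemma bind.
assert best_response("build") == "build"
assert best_response("pause") == "build"
```

Under any payoffs with that ordering, “build” dominates, which is why arguing about p(doom) alone doesn’t dissolve the dilemma.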
And think about the fact that we will develop AGIs (note: not ASI) anyway, and alignment (or at least control) will almost certainly work for them.[1] The prisoner’s dilemma means you have to match the drone-warfare capabilities of the other side regardless of p(doom).
In the world where the USG understands there are risks but thinks of them as something with decent odds of being solvable, we build it anyway. The gameboard is a 20% chance of dying and an 80% chance of handing the light cone to your enemy if the other side builds it and you do not. I think this is the most probable scenario, making all Pause efforts doomed. High p(doom) folks can’t even convince low p(doom) folks on LessWrong, the subset of optimists most likely to be receptive to their arguments, that they are wrong. There is no chance you won’t simply end up as a faction within the USG, the way environmentalists are.
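As a hedged back-of-the-envelope version of that gameboard (the utility numbers for “win”, “doom”, and “cede the light cone” are my own illustrative assumptions, not anything more principled):

```python
# Expected-value comparison under the 20%/80% gameboard above, from
# the perspective of a side deciding whether to build while assuming
# the other side builds regardless. Utilities are illustrative
# assumptions: winning the race = 1, extinction = -10, ceding the
# light cone = -10 (treating losing everything as comparably bad).
P_DOOM = 0.20
U_WIN, U_DOOM, U_CEDE = 1.0, -10.0, -10.0

# If you build: 20% chance alignment fails and everyone dies,
# 80% chance you keep pace (or win).
ev_build = P_DOOM * U_DOOM + (1 - P_DOOM) * U_WIN

# If you pause while they build: the doom risk is unchanged, but in
# the 80% branch where alignment works, the light cone goes to them.
ev_pause = P_DOOM * U_DOOM + (1 - P_DOOM) * U_CEDE

print(f"EV(build) = {ev_build:+.1f}, EV(pause) = {ev_pause:+.1f}")
# EV(build) = -1.2, EV(pause) = -10.0 -> building dominates under
# these assumed utilities, which is the "Pause is doomed" point.
```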
But let’s pretend for a moment that the USG buys the high-risk doomer argument for superintelligence. The USG and CCP are both rushing to build AGIs regardless, since AGI can be controlled, and not having a drone swarm means you lose military relevance. Because of how fuzzy the line between ASI and AGI will be in this world, I think it’s very plausible that enough people will be convinced the CCP doesn’t believe alignment is too hard and will build ASI anyway.
Even people with high p(doom)s might have a nagging part of their mind asking: what if alignment just works? If alignment just works (again, this is impossible to disprove, since if we could prove or disprove it we wouldn’t need to consider pausing to begin with; it would be self-evident), then great, you’ve just handed your entire nation’s future to the enemy.
We have some time to solve alignment, but a long-term pause will be downright impossible. What we need to do is tackle the technical problem ASAP instead of trying to pause. The race conditions are set; the prisoner’s dilemma is locked in.
[1] I think they will certainly work. We have a long history of controlling humans and forcing them to do things they don’t want to do. Practically every argument about p(doom) relies on the AI being smarter than us. If it’s not, it’s just an insanely useful tool. All the solutions that sound “dumb” for ASI, like having an off switch or air-gapping, work for weak-enough but still useful systems.