There is nothing physically impossible about it lasting however long it needs to; that’s only implausible for the same political and epistemic reasons that any global moratorium at all is implausible. GPUs don’t grow on trees.
My point in the above comment is that pivotal acts don’t by their nature stand apart: a conventional moratorium that actually helps is also a pivotal act. Pivotal act AIs are something like task AIs that can plausibly be made to achieve a strategically relevant effect relatively safely, well in advance of having the understanding necessary to align a general agentic superintelligence, using alignment techniques designed around the lack of such understanding. Advances made by humans with the use of task AIs could then increase the robustness of a moratorium’s enforcement (better cybersecurity and compute governance), reduce the downsides of the moratorium’s presence (tool AIs allowed to make biotech advances), and ultimately move towards being predictably ready for a superintelligent AI, which might initially look like developing alignment techniques that work for safely making more and more powerful task AIs. Scalable molecular manufacturing of compute is an obvious landmark, and can’t end well without robust compute governance already in place. Human uploading is another tool that can plausibly be used to improve global security without a better understanding of AI alignment.
(I don’t see how what we currently know justifies Hanson’s concern of never making enough progress to lift a value drift moratorium. If theoretical progress can get feedback from gradually improving task AIs, there is a long way to go before concluding that the process would peter out before superintelligence, which is what it would take for any sort of plunge to be remotely sane for the world. We haven’t been at it for even a million years yet.)