The third question is

> Does X agree that there is at least one concern such that we have not yet solved it and we should not build superintelligent AGI until we do solve it?
Note the word “superintelligent.” This question would not resolve as “never” if the consensus specified in the question is reached after AGI is built (but before superintelligent AGI is built). Rohin Shah notes something similar in his comment:
> even if we build human-level reasoning before a majority is reached, the question could still resolve positively after that, since human-level reasoning != AI researchers are out of a job
Unrelatedly, you should probably label your comment “aside.” [edit: I don’t endorse this remark anymore.]
It was meant as a submission, except that I couldn't be bothered to actually implement my distribution on that website :)

Even (or especially) after superintelligent AI, researchers might come to the conclusion that we weren't prepared and *shouldn't* build another, regardless of whether the existing sovereign would allow it.