I think you are missing the possibility that the outcomes of the pivotal process could be
-no one builds autonomous AGI
-autonomous AGI is built only in post-pivotal outcome states, where the condition of building it is alignment being solved
Sure, that’s true—but in that case the entire argument should be put in terms of:
We can (aim to) implement a pivotal process before a unilateral AGI-assisted pivotal act is possible.
And I imagine the issue there would all be around the feasibility of implementation. I think I’d give a Manhattan project to solve the technical problem much higher chances than a pivotal process. (Of course people should think about it—I just don’t expect them to come up with anything viable.)
Once a unilateral AGI-assisted pivotal act is possible, the attitude of the creating org before interacting with their AGI is likely to be irrelevant.
So e.g. this just seems silly to me:
So, thankfully-according-to-me, no currently-successful AGI labs are oriented on carrying out pivotal acts, at least not all on their own.
They won’t be on their own: they’ll have an AGI to set them straight on what will/won’t work.