Using our vast resources and our superintelligent AI friends
At the point where you have vast resources and superintelligent AI friends, the critical period is already in the past. In order to survive, one needs to be able to align superhuman AI without resources anywhere near that grand.
Yes, obviously. I start the sentence with “Assume we create an aligned superintelligence”. The point of the post is that you can make commitments in the worlds where we succeed at alignment that help us survive in the worlds where we fail. I thought this was clear from the way I phrased it, but if it is easy to misunderstand, please tell me what caused the confusion so I can edit for clarity.