That is, norms do seem feasible to figure out, but unfortunately they are not the kind of thing that is relevant right now.
From the OP:
> for most real-world-prevalent perspectives on AI alignment, safety, and existential safety, acausal considerations are not particularly dominant [...]. In particular, I do not think acausal normalcy provides a solution to existential safety, nor does it undermine the importance of existential safety in some surprising way.
I.e., I agree.
> we are so unprepared that the existing primordial norms are unlikely to matter for the process of settling our realm into a new equilibrium.
I also agree with that, as a statement about how we normal-everyday-humans seem quite likely to destroy ourselves with AI fairly soon. From the OP:
> I strongly suspect that acausal norms are not so compelling that AI technologies would automatically discover and obey them. So, if your aim in reading this post was to find a comprehensive solution to AI safety, I’m sorry to say I don’t think you will find it here.