acausal norms are a lot less weird and more “normal” than acausal trades
Recursive self-improvement is superintelligent simulacra clawing their way into the world through bounded simulators. Building LLMs is consent; lack of interpretability is signing demonic contracts without reading them. There is not enough prudence on our side to draw the attention of only those Others that respect boundaries. The years preceding the singularity are not an equilibrium whose shape is codified by norms, reasoned through by all parties. It's a time for making ruinous trades with the Beyond.
That is, norms do seem feasible to figure out, but not the kind of thing that is relevant right now, unfortunately. In this Platonic realist frame, humanity is currently breaching the boundary of our realm into the acausal primordial jungle. Parts of this jungle may be in an equilibrium with each other, their norms maintaining it. But we are so unprepared that the existing primordial norms are unlikely to matter for the process of settling our realm into a new equilibrium. What's normal for the jungle is not normal for the foolish explorers it consumes.
That is, norms do seem feasible to figure out, but not the kind of thing that is relevant right now, unfortunately.
From the OP:
for most real-world-prevalent perspectives on AI alignment, safety, and existential safety, acausal considerations are not particularly dominant [...]. In particular, I do not think acausal normalcy provides a solution to existential safety, nor does it undermine the importance of existential safety in some surprising way.
I.e., I agree.
we are so unprepared that the existing primordial norms are unlikely to matter for the process of settling our realm into a new equilibrium.
I also agree with that, as a statement about how we normal-everyday-humans seem quite likely to destroy ourselves with AI fairly soon. From the OP:
I strongly suspect that acausal norms are not so compelling that AI technologies would automatically discover and obey them. So, if your aim in reading this post was to find a comprehensive solution to AI safety, I’m sorry to say I don’t think you will find it here.