I claim that a given random program, regardless of whether it explicitly predicts the future, is unlikely to have the kind of motivational structure that would exhibit instrumental convergence.
Yes, I understand that. What I’m more interested in knowing, however, is how this statement connects to AI alignment in your view, since any AI created in the real world will certainly not be “random”.