If the AI is modeling the real world, then it might in some ways care about it
I am not convinced at all that this is true. Consider an AI whose training objective simply makes it want to model how the world works as well as possible, like a pure scientist who does not try to acquire more knowledge via experiments but only reasons about and explores explanatory hypotheses in order to build a distribution over theories of the observed data. It is agency, together with utilities or rewards, that induces a preference over particular states of the world.
I do think this part is speculative. The degree of “inner alignment” to the training objective depends on the details.
How much “try to model the world well” leads to real-world agency depends partly on the details of this objective. For example, running a scientific experiment would result in understanding the world better, and if there is RL training toward “better understand the world”, that reward could propagate into an intention to carry out experiments that increase understanding of the world, which is a real-world objective.
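As a rough illustration of how such a reward term ends up pointing at the world, here is a minimal sketch; the names (WorldModel, understanding_reward, environment_step) are hypothetical, not an existing API. The reward is defined as the reduction in the world-model’s prediction error caused by whatever observations an action produces, so actions that generate informative observations, such as experiments, are exactly the ones that get rewarded.

```python
import numpy as np

# Hypothetical toy sketch (WorldModel, understanding_reward, environment_step are
# invented names). The reward is the reduction in the world-model's prediction
# error after an action's observations come in, so informative actions earn reward.

class WorldModel:
    """Toy stand-in for a learned predictive model of the environment."""
    def __init__(self):
        self.error = 1.0  # current average prediction error

    def update_on(self, observation):
        # Pretend the model improves a bit whenever it absorbs new data.
        self.error *= 0.9

def understanding_reward(model, action, environment_step):
    """Reward = how much the action's observations improved the model."""
    error_before = model.error
    observation = environment_step(action)  # e.g. "run an experiment"
    model.update_on(observation)
    return error_before - model.error       # positive if understanding improved

# Usage: an action that yields a new observation is rewarded.
model = WorldModel()
print(understanding_reward(model, "measure_temperature", lambda a: f"data from {a}"))
```

The point is only that the reward references the effect of actions on the model, which is enough to make acting on the world instrumentally valuable.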
If, instead, the AI’s dataset is fixed and the objective is to find a good compression of it, that is less directly a real-world objective. Still, depending on the training setup, the AI might get reward for thinking certain thoughts that lead it to discover how to compress the dataset better. That would be “consequentialism”, at least within a limited, computational domain.
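For contrast, here is a minimal sketch of what a fixed-dataset compression objective looks like; the dataset and model are invented for illustration. The loss is just the code length of the fixed data under the model’s distribution, and nothing in it refers to taking actions in the world, so any “consequentialism” would have to be about which internal computations lead to a shorter code.

```python
import numpy as np

# Hypothetical sketch of a fixed-dataset compression objective (data and model
# invented for illustration). The loss is the code length, in bits, of the fixed
# dataset under the model's predictive distribution; the only lever is internal.

def code_length_bits(dataset, probs):
    """Bits needed to encode the fixed dataset under the model's distribution."""
    return -np.sum(np.log2(probs[dataset]))

rng = np.random.default_rng(0)
dataset = rng.integers(0, 4, size=1000)      # fixed once; nothing the AI does changes it
counts = np.bincount(dataset, minlength=4)
model_probs = counts / counts.sum()          # the "theory" that best compresses the data

print(code_length_bits(dataset, model_probs))  # the objective being minimized
```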
An overall reason to think it is at least uncertain whether AIs that model the world would care about it is that an AI that did care about the world would, as an instrumental goal, compliantly solve its training problems and some test problems (before it has the capacity for a treacherous turn). So good short-term performance doesn’t by itself say much about goal-directed behavior under generalization.
The distribution of goals under generalization therefore depends on things like which mind-designs are easier for the search/optimization algorithm to find. It seems pretty uncertain to me whether agents with general goals are “simpler” than agents with task-specific goals (it probably depends on the task) and therefore easier to find at roughly equivalent performance. I do think gradient descent is relatively more likely to find inner-aligned agents (with task-specific goals), because the internal parts are themselves pushed by the gradient toward task performance; it is not just a black-box search (a toy contrast follows).
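To make that last contrast concrete, here is a toy comparison; the task and numbers are invented, and it illustrates only the difference in how the two search processes touch internal parameters, not inner alignment itself. Gradient descent sends a training signal to every parameter at every step, while black-box search only scores whole candidates.

```python
import numpy as np

# Toy contrast between gradient descent and black-box search on the same loss.
# In gradient descent every internal parameter is nudged directly toward task
# performance; in black-box search only whole candidates are scored.

rng = np.random.default_rng(0)
target = rng.normal(size=10)
loss = lambda w: np.mean((w - target) ** 2)

# Gradient descent: each coordinate receives a gradient signal every step.
w_gd = np.zeros(10)
for _ in range(200):
    w_gd -= 0.1 * 2 * (w_gd - target) / 10   # analytic gradient of the mean-squared loss

# Black-box search: candidates are only judged by their overall score.
w_bb = np.zeros(10)
for _ in range(200):
    candidate = w_bb + 0.1 * rng.normal(size=10)
    if loss(candidate) < loss(w_bb):
        w_bb = candidate

print(loss(w_gd), loss(w_bb))   # gradient descent typically ends up much lower
```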
Yudkowsky mentions evolution as an argument that inner alignment can’t be assumed. I think there are quite a lot of disanalogies between evolution and ML, but the general point holds: some training processes produce agents whose goals aren’t aligned with the training objective. I think supervised learning systems like LLMs, in particular, are unlikely to exhibit this, as explained in the section on myopic agents.