I think talking of “loss minimizing” is conflating two different things here. Minimizing training loss is alignment of the model with the alignment target given by the training dataset. But the Alzheimer’s example is not about that, it’s about some sort of reflective equilibrium loss, harmony between the model and hypothetical queries it could in principle encounter but didn’t encounter in the training dataset. The latter is also a measure of robustness.
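If it helps to see the two losses side by side, here’s a minimal numpy toy (everything in it is an invented stand-in, not a claim about the actual setup): the same model can have near-zero training loss while a separately measured loss on hypothetical queries outside the training distribution, playing the role of the reflective-equilibrium/robustness measure above, stays large.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground truth the model is supposed to track (a stand-in for the alignment target).
def truth(x):
    return np.sin(x)

# Toy "model": a low-degree polynomial fit to the training episodes.
def fit_model(xs, ys, degree=3):
    return np.polyfit(xs, ys, degree)

def loss(model, xs, ys):
    preds = np.polyval(model, xs)
    return float(np.mean((preds - ys) ** 2))

# Training dataset covers only a narrow slice of possible queries.
train_x = rng.uniform(0, 2, 50)
train_y = truth(train_x)

# Hypothetical queries never seen in training, probing robustness.
novel_x = rng.uniform(0, 6, 200)
novel_y = truth(novel_x)

model = fit_model(train_x, train_y)
print("training loss:        ", loss(model, train_x, train_y))   # near zero
print("loss on novel queries:", loss(model, novel_x, novel_y))   # much larger
```

The second number is the one the Alzheimer’s example is about: training doesn’t directly optimize it, so it only comes down if the model happens to generalize in the intended way.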
Prompt-conditioned behaviors of a model (in particular, behaviors conditioned on the presence of a word, or the name of a character) could themselves be thought of as models, represented in the outer unconditioned model. These specialized models (trying to channel particular concepts) are not necessarily adequately trained, especially if they specialize in phenomena that were not explored in the episodes of the training dataset. The implied loss for an individual concept (specialized prompt-conditioned model) compares the episodes generated in its scope by all the other concepts of the outer model to the sensibilities of the concept. Reflection reduces this internal alignment loss by rectifying the episodes (bargaining with the other concepts), changing the concept to anticipate the episodes’ persisting deformities, or shifting the concept’s scope to pay attention to different episodes. With enough reflection, a concept is only invoked in contexts to which it’s robust: its intuitive model-channeled guidance is coherent across the episodes of its reflectively settled scope, and it provides acausal coordination among these episodes in its role as an adjudicator, expressing its preferences.
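Read as pseudocode, that reflection loop has three moves. Here is a deliberately crude sketch of it, with everything assumed purely for illustration (episodes are numbers, a concept’s “sensibility” is a single target value, its scope a set of episode indices), just to show how each move can lower the same implied loss:

```python
from dataclasses import dataclass
import random

random.seed(0)

# Toy caricature of the reflection loop described above; none of this is meant
# as a claim about how actual models represent concepts or episodes.

@dataclass
class Concept:
    scope: set          # episode indices the concept is invoked on
    sensibility: float  # what the concept expects in-scope episodes to look like

def implied_loss(concept, episodes):
    """Mismatch between the episodes in the concept's scope and its sensibility."""
    in_scope = [episodes[i] for i in concept.scope]
    if not in_scope:
        return 0.0
    return sum((e - concept.sensibility) ** 2 for e in in_scope) / len(in_scope)

def reflect(concept, episodes, steps=300):
    """Greedy reflection: try one of the three moves, keep it if the loss drops."""
    for _ in range(steps):
        move = random.choice(["rectify_episode", "change_concept", "shift_scope"])
        loss_before = implied_loss(concept, episodes)

        if move == "rectify_episode" and concept.scope:
            # Rectify an episode (bargain with the other concepts that wrote it).
            i = random.choice(list(concept.scope))
            old = episodes[i]
            episodes[i] += 0.5 * (concept.sensibility - episodes[i])
            if implied_loss(concept, episodes) >= loss_before:
                episodes[i] = old  # revert if no improvement

        elif move == "change_concept" and concept.scope:
            # Change the concept to anticipate the episodes' persisting deformities.
            old = concept.sensibility
            in_scope = [episodes[i] for i in concept.scope]
            concept.sensibility = old + 0.5 * (sum(in_scope) / len(in_scope) - old)
            if implied_loss(concept, episodes) >= loss_before:
                concept.sensibility = old

        elif move == "shift_scope" and len(concept.scope) > 1:
            # Shift scope: stop attending to the worst-fitting episode,
            # but only if it is a genuine outlier relative to the rest.
            worst = max(concept.scope,
                        key=lambda i: (episodes[i] - concept.sensibility) ** 2)
            if (episodes[worst] - concept.sensibility) ** 2 > 2 * loss_before:
                concept.scope.discard(worst)

    return concept, episodes

episodes = [random.gauss(0.0, 1.0) for _ in range(20)]
concept = Concept(scope=set(range(20)), sensibility=0.0)
print("implied loss before reflection:", implied_loss(concept, episodes))
concept, episodes = reflect(concept, episodes)
print("implied loss after reflection: ", implied_loss(concept, episodes),
      "| scope size:", len(concept.scope))
```

In this caricature, “reflective equilibrium” is just the fixed point where none of the three moves helps anymore, at which point the concept only attends to episodes it is robust to.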
So this makes a distinction between search and reflection in responding to a novel query, where reflection might involve some sort of search (as part of amplification), but its results won’t be robustly aligned before reflective equilibrium for the relevant concepts is established.