You are exactly right that active inference models that behave in a self-interested, or any coherently goal-directed, way must have something like an optimism bias.
My guess about what happens in animals, and to some extent humans: part of the ‘sensory inputs’ are interoceptive, tracking internal body variables like temperature, glucose levels, hormone levels, etc. Evolution has already built a ton of ‘control theory type circuits’ into bodies (even building a body from a single cell is an extremely impressive optimization task...). This evolutionarily older circuitry likely encodes a lot about what evolution ‘hopes for’ in terms of which states the body will occupy.

Subsequently, when building predictive/generative models and turning them into active inference, my guess is that a lot of the specification is done by ‘fixing priors’ on interoceptive inputs at values like ‘not being hungry’. The later-learned structures then also become a mix between beliefs and goals: e.g. the fixed prior on my body temperature leads, over my lifetime, to a model where I get a ‘prior’ toward wearing a waterproof jacket when it rains, which becomes something between an optimistic belief and a ‘preference’. (This retrodicts that a lot of human biases could be explained as “beliefs” somewhere between “how things are” and “how it would be nice if they were”.)
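To make the ‘fixing priors’ idea concrete, here is a minimal toy sketch (my own illustrative construction, not code from any active inference library): a fixed preference prior over one interoceptive observation (‘not hungry’) plus a learned predictive model over two actions, with the action chosen to minimize the pragmatic part of expected free energy. The observation space, the actions, and all probabilities are made-up assumptions.

```python
import numpy as np

# Toy sketch of the 'fixed interoceptive priors' idea from the paragraph above.
# Not code from any active inference library; the observation space, the two
# actions, and all probabilities are illustrative assumptions.

# Interoceptive observation: 0 = sated, 1 = hungry.
# The 'fixed prior' encodes what evolution 'hopes for': mostly 'sated'.
log_preference = np.log(np.array([0.99, 0.01]))

# Learned predictive model: p(observation | action), e.g. distilled from experience.
predicted_obs = {
    "forage": np.array([0.9, 0.1]),  # foraging usually ends with being sated
    "rest":   np.array([0.3, 0.7]),  # resting while hungry usually stays hungry
}

def pragmatic_cost(p_obs):
    """Expected surprise of the predicted observations under the fixed prior
    (the 'pragmatic' part of expected free energy; epistemic value omitted)."""
    return float(-(p_obs * log_preference).sum())

def select_action(model):
    """Pick the action whose predicted observations best fit the fixed prior.
    Externally this looks like pursuing the goal 'do not be hungry'; internally
    it is just prediction under a biased ('optimistic') prior."""
    return min(model, key=lambda a: pragmatic_cost(model[a]))

print({a: round(pragmatic_cost(p), 3) for a, p in predicted_obs.items()})
print("chosen action:", select_action(predicted_obs))
```

In this toy, the ‘goal’ never appears as a reward function; it only exists as a prior over what the agent expects to observe, which is the sense in which beliefs and preferences get mixed.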
But this suggests an approach to aligning embedded simulator-like models: induce an optimism bias such that the model believes everything will turn out fine (according to our true values).
My current guess is that any approach to alignment which will actually lead to good outcomes must include some features suggested by active inference. E.g. active inference suggests that something like an ‘aligned’ agent trying to help me likely ‘cares’ about my ‘predictions’ coming true, and has some ‘fixed priors’ about me liking the results. That gives me something which avoids both ‘my wishes were satisfied, but in bizarre, goodharted ways’ and ‘this can do more than I can’.
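As a rough illustration of that last guess (again a toy sketch I made up, not an established alignment scheme): an assistant that scores candidate plans by two terms, the probability that the outcome matches my own prediction of what will happen, and a fixed prior that I end up liking the result. The plan names and probabilities are invented for illustration.

```python
import numpy as np

# Toy sketch of the 'aligned agent' guess above; my own illustrative
# construction, not an established alignment scheme. Plan names and
# probabilities are made up.

# For each candidate plan the assistant estimates:
#   p_match: probability the outcome matches what I predicted would happen
#   p_liked: probability I actually like the result (the 'fixed prior' term)
plans = {
    "goodharted_literal":      {"p_match": 0.95, "p_liked": 0.20},
    "helpful_and_predictable": {"p_match": 0.80, "p_liked": 0.90},
    "alien_but_clever":        {"p_match": 0.10, "p_liked": 0.70},
}

W_LIKED = 1.0  # trade-off between staying predictable and the 'liked' prior

def score(plan):
    # Both terms have to be reasonably high: a goodharted plan fails the
    # 'liked' prior, and a plan I could never have predicted fails the
    # prediction-matching term.
    return np.log(plan["p_match"]) + W_LIKED * np.log(plan["p_liked"])

best = max(plans, key=lambda name: score(plans[name]))
print({name: round(score(p), 3) for name, p in plans.items()})
print("preferred plan:", best)
```

In this toy scoring, the goodharted plan loses on the ‘liked’ prior and the plan I could never have predicted loses on the prediction-matching term, so the preferred plan is one that is both endorsed by me and legible to me.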