Under an Active Inference perspective, it is hardly surprising that we use the same concepts for [Expecting something to happen] and [Trying to steer towards something happening], as they are the same thing happening in our brain.
I don’t know enough about this to know whether the active inference paradigm predicts that this similarity at the neuronal level plays out as humans using similar language to describe the two phenomena, but if it does, the common use of this “believing in” concept might count as evidence in its favour.
I think a better active-inference-inspired perspective that fits well with the distinction Anna is trying to make here is that of representing preferences as probability distributions over state/observation trajectories, the idea being that one assigns high “belief in” probabilities to trajectories that are more desirable. This “preference distribution” is distinct from the agent’s “prediction distribution”, which tries to anticipate and explain outcomes as accurately as possible. Active Inference is then cast as the process of minimising the KL divergence between these two distributions.
A couple of pointers which articulate this idea very nicely in different contexts:
Action and Perception as Divergence Minimization: https://arxiv.org/abs/2009.01791
Whence the Expected Free Energy: https://arxiv.org/abs/2004.08128
Alex Alemi’s brilliant talk at NeurIPS: https://nips.cc/virtual/2023/73986