“Improve knowledge” here can be read as “its cognition becomes more fit to the environment.” Someone might instead understand it as “represent the environment more accurately,” which it does not need to mean.
With such a wide reading, it starts to sound to me like “the agent isn’t broken,” which does not really limit anticipations about the agent’s structure.
Yes, classical Bayesian decision theory often requires a realizability assumption, which is unrealistic.
Realizability is anticipation-limiting but unrealistic.
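To make the realizability point concrete, here is a minimal sketch (all numbers invented for illustration) of a Bayesian learner whose hypothesis class does not contain the true environment. The learner considers only coin biases 0.2 and 0.8, while the true bias is 0.5, so no amount of evidence can lead it to the truth:

```python
# Hedged sketch: Bayesian updating under a failed realizability assumption.
# Hypothesis class: coin bias is 0.2 or 0.8. True bias: 0.5 (not in the class).

hypotheses = {0.2: 0.5, 0.8: 0.5}  # prior over the two biases

# A representative balanced sequence, as a fair coin produces on average
# (1 = heads, 0 = tails).
observations = [1, 0] * 50

for obs in observations:
    # Multiply each prior weight by the likelihood of the observation,
    # then renormalize.
    unnorm = {h: p * (h if obs else 1 - h) for h, p in hypotheses.items()}
    z = sum(unnorm.values())
    hypotheses = {h: p / z for h, p in unnorm.items()}

# On symmetric data the posterior stays split between the two wrong
# hypotheses; the true bias 0.5 is never even considered.
print(hypotheses)
```

When realizability holds (the true bias is in the class), the posterior concentrates on it; when it fails, as here, the posterior can only pick the “least wrong” hypothesis, which is why the assumption does so much anticipation-limiting work.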
While EUM captures the core of consequentialism, it does so in a way that is not computationally feasible and, pushed far enough, leads to certain paradoxes. So yes, EUM is unrealistic. The details are discussed in the embedded agency post.
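For reference, the core of EUM (expected utility maximization) fits in a few lines; the actions, probabilities, and utilities below are invented for illustration. The infeasibility arises not from the rule itself but from the enormous outcome spaces of real environments:

```python
# Hedged sketch of expected utility maximization: pick the action whose
# outcome distribution has the highest expected utility.
# (probability, utility) pairs per action -- made-up toy numbers.
actions = {
    "safe":  [(1.0, 10)],
    "risky": [(0.5, 30), (0.5, -20)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # "safe": EU 10 beats the risky action's EU of 5
```

In a toy table like this the maximization is trivial; the consequentialist content, and the trouble, is in enumerating outcomes and assigning them probabilities and utilities in the first place.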