Choosement is not part of my definition of consequentialism.
Searching based on consequences is part of it, and you are right that in the real world you would want to update your model based on new data you learn. In the EUM framework, these updates are captured by Bayesian conditioning. There are other frameworks which capture the updates in other ways, but the basic points remain the same.
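As a hedged sketch of what Bayesian conditioning amounts to here (the hypothesis names and all numbers are invented for illustration, not from the discussion): the agent keeps a distribution over world-models and reweights it by how well each model predicted the observed consequence.

```python
# Minimal sketch of Bayesian conditioning: an agent updates its belief
# about which world-model is correct after observing a consequence.
# Hypotheses and probabilities are made up for illustration.

def condition(prior, likelihood, observation):
    """Return the posterior P(h | observation) over hypotheses h."""
    unnorm = {h: p * likelihood[h](observation) for h, p in prior.items()}
    z = sum(unnorm.values())
    return {h: w / z for h, w in unnorm.items()}

# Two hypothetical world-models and the chance each assigns to "reward".
prior = {"lever_works": 0.5, "lever_broken": 0.5}
likelihood = {
    "lever_works": lambda obs: 0.9 if obs == "reward" else 0.1,
    "lever_broken": lambda obs: 0.2 if obs == "reward" else 0.8,
}

# Observing a reward shifts belief toward "lever_works".
posterior = condition(prior, likelihood, "reward")
```

The same reweighting step, iterated over a stream of observations, is the whole of "updating the model based on new data" in the EUM picture.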
How does “searching based on consequences” fail to ever use choosement?
The possibility of alternatives to choosement is discussed here.
Linking to the totality of a very long post has downsides comparable to writing a wall-of-text reply.
I understand how “searching” can fail to be choosement when it ends up being “solving algebraically” without actually checking any values of the open variables.
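That contrast can be made concrete with a toy example of my own (not from the thread): both procedures below “search based on consequences” for the same answer, but only the first ever inspects candidate values of the open variable.

```python
# Toy contrast: finding x such that 2*x + 3 == 11.

def solve_by_enumeration(lo, hi):
    """Choosement-style search: check every candidate value in turn."""
    for x in range(lo, hi + 1):
        if 2 * x + 3 == 11:
            return x
    return None

def solve_algebraically():
    """No candidate is ever checked: rearrange 2x + 3 = 11 to x = (11 - 3) / 2."""
    return (11 - 3) // 2

# Both arrive at the same solution by very different routes.
assert solve_by_enumeration(-100, 100) == solve_algebraically() == 4
```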
Going from abstract descriptions to ever more concrete solutions is not coupled to how many elementary, ground-level-concrete solutions get disregarded so that it can be fast. I thought part of what makes “checks every option” worrying is that it doesn’t get fooled by faulty (or non-existent) abstractions.
So to me it is surprising that an agent that never considers alternative avenues falls under the umbrella of “consequentialist”. On that reading, an agent that changes its policy when it is in pain and keeps its policy when it feels pleasure “is consequentialist” on the grounds that its policy was caused by life events, even if the policy is pure reflex.
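For concreteness, here is my own toy formulation of that reflex agent (all names are invented): its policy is shaped by past pain and pleasure, yet at decision time it performs no search over consequences at all.

```python
# A pure-reflex agent: life events reshape its policy, but acting is a
# fixed table lookup with no consideration of alternative actions.

class ReflexAgent:
    def __init__(self, policy):
        self.policy = policy  # state -> action, a fixed lookup table

    def act(self, state):
        # No search over consequences happens here: pure reflex.
        return self.policy[state]

    def feel(self, sensation, new_policy):
        # Pain triggers a policy change; pleasure leaves it as it is.
        if sensation == "pain":
            self.policy = new_policy

agent = ReflexAgent({"hot_stove": "touch"})
agent.feel("pain", {"hot_stove": "withdraw"})  # policy changes after pain
```

Whether this agent counts as “consequentialist” is exactly the question at issue: its policy was caused by consequences, but nothing in `act` ever weighs them.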
There were also vibes to the effect that “this gets me what I want” is a consequentialist stance because of the appearance of “gets”. So it is consequentialist because it projects winning.
Well, you are right that a functioning consequentialist must either magically have perfect knowledge, or must have some way of observing and understanding the world to improve its knowledge. Since magic isn’t real, for advanced capable agents it must in reality be the latter.
In the EUM framework, the observations and improvements in understanding are captured by Bayesian updating. In other frameworks, they may be captured by other things.
“Improve knowledge” here can mean “its cognition is more fit to the environment”. Somebody could read it as “represent the environment more”, which it does not need to mean.
With such a wide reading, it starts to sound to me like “the agent isn’t broken”, which is not exactly anticipation-limiting about structure.
Yes, classical Bayesian decision theory often requires a realizability assumption, which is unrealistic.
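A minimal sketch of why realizability matters (my own toy numbers, not from the discussion): when the true process lies outside the hypothesis class, Bayesian updating still converges, but only to the least-wrong hypothesis, never to the truth.

```python
# Realizability sketch: the data has a heads-frequency of 0.55 (here a
# fixed sequence with exactly that frequency), but the hypothesis class
# contains only the biases 0.1 and 0.9 - the truth is not realizable.

data = ["H", "T"] * 450 + ["H"] * 100  # 550 heads, 450 tails

posterior = {0.1: 0.5, 0.9: 0.5}  # bias -> posterior weight
for obs in data:
    for bias in posterior:
        posterior[bias] *= bias if obs == "H" else (1 - bias)
    z = sum(posterior.values())  # renormalize each step to avoid underflow
    posterior = {b: w / z for b, w in posterior.items()}

# The posterior concentrates on the least-wrong hypothesis (0.9, since
# heads are in the majority), but no amount of data can recover the
# true frequency 0.55: it was never in the hypothesis class.
```

Under realizability the same update would concentrate on the truth, which is what makes the assumption both anticipation-limiting and unrealistic.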
Realizability is anticipation-limiting but unrealistic.
While EUM captures the core of consequentialism, it does so in a way that is not very computationally feasible and, pushed far enough, leads to certain paradoxes. So yes, EUM is unrealistic. The details are discussed in the embedded agency post.