Sorry, I guess I didn’t make the connection to your post clear. I substantially agree with you that utility functions over agent-states aren’t rich enough to model real behavior. (Except, maybe, at a very abstract level, à la predictive processing, which I don’t understand well enough to make the connection precise.)
Utility functions over world-states—which is what I thought you meant by ‘states’ at first—are in some sense richer, but I still think they’re inadequate.
And I agree that utility functions over agent histories are too flexible.
I was sort of jumping off to a different way to look at value, one that might keep some of the desirable coherence of the utility-function-over-states framing without its rigidity.
And this way is something like: viewing ‘what you value’ or ‘what is good’ as something abstract, something to be inferred from the many partial glimpses of it we have in the form of our extant values.
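To make that ‘inference’ framing slightly more concrete, here’s a toy sketch of the kind of thing I have in mind (purely illustrative, every specific in it is made up): treat the ‘true’ value function as a latent weight vector over outcome features that we never observe directly, and treat our extant values as noisy pairwise preference judgments over outcomes; Bayesian updating over candidate weight vectors then recovers an estimate of the abstract thing the partial glimpses point at.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the "true" value function is a latent weight vector
# over two outcome features; we never observe it directly.
true_w = np.array([0.8, 0.2])

# Our extant values show up only as noisy pairwise preferences between outcomes.
outcomes = rng.normal(size=(40, 2))               # feature vectors of possible outcomes
pairs = rng.integers(0, len(outcomes), size=(200, 2))
diffs = outcomes[pairs[:, 0]] - outcomes[pairs[:, 1]]
prefs = rng.random(200) < 1 / (1 + np.exp(-(diffs @ true_w) / 0.5))  # logistic noise

# Crude grid posterior over candidate weight vectors (uniform prior).
grid = np.stack(np.meshgrid(np.linspace(-1, 1, 81),
                            np.linspace(-1, 1, 81)), axis=-1).reshape(-1, 2)
logits = (diffs @ grid.T) / 0.5                   # (n_preferences, n_candidates)
log_lik = np.where(prefs[:, None],
                   -np.logaddexp(0, -logits),     # log P(first preferred | candidate w)
                   -np.logaddexp(0, logits)).sum(axis=0)
post = np.exp(log_lik - log_lik.max())
post /= post.sum()

# The posterior mean is the inferred "abstract thing" behind the glimpses.
print("posterior mean weights:", post @ grid)
print("true latent weights:   ", true_w)
```

The point isn’t this particular model, of course; it’s just that ‘what is good’ plays the role of a latent variable to be inferred, rather than a function we already have written down.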