I agree with the main claim of this post, mostly because I came to the same conclusion several years ago, and nothing in the intervening time has changed my mind. If anything, I’m even more sure that values are after-the-fact reifications that attempt to describe why we behave the way we do.
Uhh… that is not a claim this post is making.
This post didn’t talk about decision making or planning, but (adopting a Bayesian frame for legibility) the rough picture is that decisions are made by maximizing expected utility as usual, where the expectation averages over uncertainty in values just like uncertainty in everything else.
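To make that picture concrete, here is a minimal toy sketch of the decision rule described above: pick the action maximizing expected utility, where the expectation averages over a posterior on candidate value functions in exactly the same way it averages over a posterior on world states. All the names and the toy setup here are illustrative assumptions, not anything from the post itself.

```python
def expected_utility(action, world_posterior, value_posterior):
    """Average utility over world states AND candidate value functions."""
    return sum(
        p_world * p_value * value_fn(action, world)
        for world, p_world in world_posterior.items()
        for value_fn, p_value in value_posterior
    )

def choose(actions, world_posterior, value_posterior):
    # Maximize expected utility; value uncertainty is treated just like
    # any other uncertainty inside the expectation.
    return max(
        actions,
        key=lambda a: expected_utility(a, world_posterior, value_posterior),
    )

# Toy example: two world states, two hypotheses about what we value.
world_posterior = {"rain": 0.3, "sun": 0.7}
likes_walks = lambda a, w: 1.0 if (a == "walk" and w == "sun") else 0.0
likes_reading = lambda a, w: 1.0 if a == "read" else 0.0
value_posterior = [(likes_walks, 0.6), (likes_reading, 0.4)]

print(choose(["walk", "read"], world_posterior, value_posterior))  # → walk
```

The point of the sketch is just that the agent never has to resolve which value hypothesis is correct before acting; the uncertainty over values flows through the expectation like everything else.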
The “values” themselves are reifications of rewards, not of behavior. And they are not “after” behavior; they are (implicitly) involved in the decision-making loop.