This seems possibly related to “maximize the quantity of positive-expectation-feeling” vs. “maximize the rationally-predicted expectation of positive feeling” as expansions of “maximize utility”. For instance, both have an “in practice” algorithm and an “in theory” algorithm that give different answers in edge cases. Both also turn on the question of when utility is calculated (present or future). My dilemma arises because the expectation of utility doesn’t perfectly correspond to later utility; yours arises because present and future utility functions don’t always resemble each other. I’m not sure how meaningful the connection is, but I thought of my dilemma while considering utility functions that change over time. Hopefully the resolution to one will help with the other.