You can modify your utility function as part of a bargaining or precommitment strategy.
I’d argue that’s (in a VNM-rational agent) not changing a utility function, but simply following it: maximizing utility via trade or prediction-of-prediction calculations.
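As a toy illustration of the "following, not changing" point: the sketch below uses a simple threat game with made-up payoffs and probabilities (all numbers and names are hypothetical, not from any source). A visible precommitment to retaliate looks like a preference change, but it gets selected precisely because it maximizes expected utility under the agent's original utility function, once you account for the opponent's prediction of the agent.

```python
# Toy sketch (hypothetical payoffs): an agent that precommits to retaliate
# looks like it has "changed" its utility function, but the precommitment
# is chosen because it maximizes expected utility under the ORIGINAL one.

# Payoffs to the agent under its original utility function:
PEACE = 10               # opponent backs off
EXPLOITED = 0            # opponent aggresses, agent caves
COSTLY_RETALIATION = -5  # opponent aggresses, agent retaliates

def opponent_aggresses(agent_precommitted: bool) -> float:
    """Probability the opponent aggresses, given its prediction of the agent.
    (Assumed numbers: a visible precommitment deters most aggression.)"""
    return 0.1 if agent_precommitted else 0.9

def expected_utility(precommit: bool) -> float:
    p = opponent_aggresses(precommit)
    # If precommitted, aggression triggers costly retaliation;
    # otherwise the agent caves and is exploited.
    bad_outcome = COSTLY_RETALIATION if precommit else EXPLOITED
    return p * bad_outcome + (1 - p) * PEACE

print("EU(no precommitment):", expected_utility(False))  # 0.9*0 + 0.1*10 = 1.0
print("EU(precommitment):  ", expected_utility(True))    # 0.1*-5 + 0.9*10 = 8.5
# The precommitment wins under the original utility function, so the agent
# is following that function, not changing it.
```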
There are probably theoretical cases in which a real-world agent would alter its preferences (warning: don’t update too much on fiction; anti-warning: this is a fun read: https://www.lesswrong.com/s/qWoFR4ytMpQ5vw3FT, chapter 5). Such agents are not perfectly rational (edit: or maybe they are, but it’s not clear how “utility” and “preferences” interact in this case).