I think you are describing an important distinction. The main argument that convinces me I actually have a utility function (i.e. a function whose expectation I am trying to maximize) is the von Neumann-Morgenstern theorem, since I do try to conform to their rationality axioms. This utility is a function defined on options, not on perceived outcomes, so from this perspective utility is by definition something whose expectation you optimize, not something whose expected perception you optimize (unless your preferences happen to depend only on your future perceptions). If their axioms were rephrased entirely in terms of my future perceptions, I would intentionally not be following them, in thought experiments involving amnesia for example.
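To spell out what "having a utility function" means here (this is just the standard statement of the theorem; the notation, with $X$ standing for the set of outcomes the options range over, is mine): a preference relation $\succeq$ over lotteries on $X$ satisfies completeness, transitivity, continuity, and independence if and only if there is some $u : X \to \mathbb{R}$ such that

$$ L \succeq M \iff \mathbb{E}_{x \sim L}[u(x)] \ge \mathbb{E}_{x \sim M}[u(x)], $$

with $u$ unique up to positive affine transformation. Nothing in the theorem requires $u$ to be a function of what I will later perceive; it is a function of the outcomes themselves.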