I personally feel happy or sad about the present state of affairs, including my expectations of future events (“Oh no, my parachute won’t deploy! I sure am going to hit the ground fast.”). I can call how satisfied I am with the current state of things, as I perceive it, my “utility”. Of course, by using that word, it’s usually assumed that my preferences obey some axioms, e.g. von Neumann-Morgenstern, which I doubt your wrapping satisfies in any meaningful way.
Perhaps there’s some retrospective sense in which I’d talk about the true utility of the actual situation at the time (in hindsight I have a more accurate understanding of how things really were and what the consequences for me would be), but as for my current assessment it is indeed entirely a function of my present mental state (including perceptions and beliefs about the state of the universe salient to me). I think we agree on that.
I’m still not entirely sure I understand the wrapping you described. It feels like it’s too simple to be used for anything.
Perhaps it’s this: given the life story of some individual (call her Ray), you can vacuously (in hindsight) model her decisions with the following story:
1) Ray always acts so that the immediately resulting state of things has the highest expected utility. Ray can be thought of as moving through time and having a utility at each time, which must include some factor for her expectations about her future (e.g. health, wealth).
2) Ray is very stupid and forms some arbitrary belief about the result of her actions, expecting with 100% confidence that the predicted future of her life will come to pass. In the next moment she will usually find herself revising many things she previously expected with certainty, i.e. she’s not actually predicting the future exactly.
3) Whatever outcome Ray believed would follow from the action she took, she assigned it utility 1. To all other possibilities she assigned utility 0.
That’s the sort of fully-described scenario that your proposal evoked in me. If you want to explain how she’s forecasting more than a singleton expectation set, and yet the expected utility for each decision she takes magically works out to be 1, I’d enjoy that.
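For concreteness, here is a minimal sketch of that construction in Python (my own illustration; `Choice`, `predicted` and the helper names are invented for the example, not anything from your proposal):

```python
# A vacuous "expected utility maximiser" wrapper around Ray's recorded choices:
# a point-mass belief per action (step 2) plus a 0/1 utility function (step 3)
# under which every action she actually took maximises expected utility (step 1).

from dataclasses import dataclass
from typing import Any, Dict, Sequence


@dataclass
class Choice:
    options: Sequence[Any]      # actions available at this moment
    taken: Any                  # the action Ray actually took
    predicted: Dict[Any, Any]   # her arbitrary, 100%-confident predicted outcome per action


def expected_utility(choice: Choice, action: Any) -> float:
    """EU under steps 2 and 3: Ray believes with certainty that `action` leads to
    choice.predicted[action]; only the outcome she predicted for the action she
    actually took gets utility 1, everything else gets 0."""
    target = choice.predicted[choice.taken]   # the single utility-1 outcome
    outcome = choice.predicted[action]        # the point-mass prediction for this action
    return 1.0 if outcome == target else 0.0


def is_vacuously_rational(history: Sequence[Choice]) -> bool:
    """Step 1: check that each recorded action maximises this rigged expected
    utility. It always does: the taken action scores 1 and nothing can score more."""
    return all(
        expected_utility(c, c.taken) >= max(expected_utility(c, a) for a in c.options)
        for c in history
    )
```

Whatever action was actually taken scores 1, while every alternative (predicted to lead somewhere else) scores 0, so the “rationality” check passes trivially; that is the sense in which the modelling seems vacuous to me.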
In other words, I don’t see any point in modeling intelligent yet not omniscient+deterministic decision making unless the utility at a given state includes some expectation of future states.
So there’s no point in discussing “utility maximisers”, as opposed to “expected utility maximisers”?
I don’t really agree: a “utility maximiser” is a simple generalisation of the concept of an “expected utility maximiser”. Since there are very many ways of predicting the future, this seems like a useful abstraction to me.
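To make that generalisation concrete, here is a rough Python sketch (my own framing; the class and parameter names are invented, not part of the wrapper I described):

```python
# A "utility maximiser" fixes only the utility function and the rule "pick the
# highest-scoring action"; an "expected utility maximiser" is the special case
# whose score is a probability-weighted average of utilities.

from abc import ABC, abstractmethod
from typing import Any, Callable, Dict, Sequence


class UtilityMaximiser(ABC):
    def __init__(self, utility: Callable[[Any], float]):
        self.utility = utility

    @abstractmethod
    def score(self, action: Any) -> float:
        """How good this action looks; the forecasting method is left open."""

    def act(self, options: Sequence[Any]) -> Any:
        # Pick whichever available action scores highest.
        return max(options, key=self.score)


class ExpectedUtilityMaximiser(UtilityMaximiser):
    """The special case: forecast a distribution over outcomes per action and
    score each action by its probability-weighted average utility."""

    def __init__(self, utility: Callable[[Any], float],
                 forecast: Callable[[Any], Dict[Any, float]]):
        super().__init__(utility)
        self.forecast = forecast  # action -> {outcome: probability}

    def score(self, action: Any) -> float:
        return sum(p * self.utility(o) for o, p in self.forecast(action).items())


# Example usage (toy numbers, purely illustrative):
agent = ExpectedUtilityMaximiser(
    utility=lambda outcome: {"dry": 1.0, "wet": 0.0}[outcome],
    forecast=lambda action: {
        "umbrella":    {"dry": 0.9, "wet": 0.1},
        "no umbrella": {"dry": 0.4, "wet": 0.6},
    }[action],
)
print(agent.act(["umbrella", "no umbrella"]))  # -> umbrella
```

The point of the split is that act() and the utility function stay fixed, while score() can be filled in with whatever forecasting machinery (or none at all) the modelled agent actually uses.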
...anyway, if you were wrapping a model of a human, the actions would clearly be based on predictions of future events. If you mean you want the prediction process to be abstracted out in the wrapper, obviously there is no easy way to do that.
You could claim that a human, while a “utility maximiser”, is not clearly an “expected utility maximiser”. My wrapper doesn’t disprove such a claim. I generally think that the “expected utility maximiser” claim is highly appropriate for a human as well, but there is no such neat demonstration of this.
Of course, by using that word, it’s usually assumed that my preferences obey some axioms, e.g. von Neumann-Morgenstern, which I doubt your wrapping satisfies in any meaningful way.
I certainly did not intend any such implication. Which set of axioms is using the word “utility” supposed to imply?
Perhaps check the definition of “utility”. It means something like “goodness” or “value”. There isn’t an obvious implication of any specific set of axioms.
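For reference, the von Neumann-Morgenstern axioms that are presumably being alluded to (stated here from the standard textbook formulation, not from anything in this thread) are:

```latex
% The standard von Neumann--Morgenstern axioms on a preference relation
% \succeq over lotteries L, M, N (listed only as a reference point; neither
% comment spells them out).
\begin{itemize}
  \item \textbf{Completeness:} for all $L, M$: either $L \succeq M$ or $M \succeq L$.
  \item \textbf{Transitivity:} $L \succeq M$ and $M \succeq N$ imply $L \succeq N$.
  \item \textbf{Continuity:} if $L \succeq M \succeq N$, then there exists
        $p \in [0,1]$ with $pL + (1-p)N \sim M$.
  \item \textbf{Independence:} $L \succeq M$ if and only if
        $pL + (1-p)N \succeq pM + (1-p)N$ for every $N$ and every $p \in (0,1]$.
\end{itemize}
```

The vNM theorem then says that preferences satisfy these axioms exactly when they can be represented as maximising the expected value of some utility function, unique up to positive affine transformation, which is presumably why the word “utility” is sometimes taken to suggest them.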