> It remains totally unclear to me why you demand the world to be such a thing.
Ah, if you don’t see ‘worlds’ as meaning any such thing, then I wonder, are we really arguing about anything at all?
I’m using ‘worlds’ that way in reference to the same general setup which we see in propositions-vs-models in model theory, or in Ω vs the σ-algebra in the Kolmogorov axioms, or in Kripke frames, and perhaps some other places.
We can either start with a basic set of “worlds” (e.g., Ω) and define our “propositions” or “events” as sets of worlds, where that proposition/event ‘holds’ or ‘is true’ or ‘occurs’; or, equivalently, we could start with an algebra of propositions/events (like a σ-algebra) and derive worlds as maximally specific choices of which propositions are true and false (or which events hold/occur).
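A minimal sketch of both directions, assuming a toy two-bit world space (the names here are invented for illustration, not standard):

```python
from itertools import product

# Direction 1: start with worlds; propositions/events are sets of worlds.
worlds = set(product([0, 1], repeat=2))            # Omega = {(0,0),(0,1),(1,0),(1,1)}
first_bit_is_1 = {w for w in worlds if w[0] == 1}  # one proposition/event

def holds(proposition, world):
    # A proposition 'holds' / 'is true' / 'occurs' at a world iff the world belongs to it.
    return world in proposition

# Direction 2: start with atomic propositions; derive worlds as maximally
# specific assignments of true/false to every atomic proposition.
atomic = ("first bit is 1", "second bit is 1")
derived_worlds = [dict(zip(atomic, vals)) for vals in product([True, False], repeat=2)]
```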
> My point is that if U has two output values, then it only needs two possible inputs. Maybe you’re saying that if |dom(U)|=2, then there is no point in having |dom(P)|>2, and maybe you’re right, but I feel no need to make such claims.
Maybe I should just let you tell me what framework you are even using in the first place. There are two main alternatives to the Jeffrey-Bolker framework which I have in mind: the Savage axioms, and the setup commonly seen in statistics textbooks, where you have a probability distribution obeying the Kolmogorov axioms and then random variables over it (a random variable being a function of type Ω→R). A utility function is then treated as a random variable.
It doesn’t sound like your notion of utility function is any of those things, so I just don’t know what kind of framework you have in mind.
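For concreteness, a minimal sketch of that textbook setup, with invented toy numbers:

```python
# Omega with a Kolmogorov-style probability assignment (toy values).
Omega = ["w1", "w2", "w3"]
P = {"w1": 0.5, "w2": 0.3, "w3": 0.2}            # nonnegative, sums to 1

def U(omega):
    # The utility function as a random variable, i.e. a function Omega -> R.
    return {"w1": 10.0, "w2": 0.0, "w3": -5.0}[omega]

expected_utility = sum(P[w] * U(w) for w in Omega)   # E[U] = 5.0 + 0.0 - 1.0 = 4.0
```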
> Maybe I should just let you tell me what framework you are even using in the first place.
I’m looking at the Savage theory from your own https://plato.stanford.edu/entries/decision-theory/ and I see U(f) = ∑ᵢ u(f(sᵢ))P(sᵢ), so at least they have no problem with the domains (O and S) being different. Now I see the confusion is that to you Ω = S (and also O = S), but to me Ω = dom(u) = O.
Furthermore, if O = {o0, o1}, then I can group the terms into u(o0)P(“we’re in a state where f evaluates to o0”) + u(o1)P(“we’re in a state where f evaluates to o1”). I’m just moving all of the complexity out of EU and into P, which I assume to work by some magic (e.g. LI) that doesn’t involve literally iterating over every possible S.
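A toy version of this regrouping (finite S, invented act f and numbers) confirms the two sums agree:

```python
# Toy state space, outcome set O = {o0, o1}, and an act f: S -> O (all invented).
S = ["s1", "s2", "s3", "s4"]
P = {"s1": 0.1, "s2": 0.2, "s3": 0.3, "s4": 0.4}
u = {"o0": 1.0, "o1": 5.0}
f = {"s1": "o0", "s2": "o1", "s3": "o0", "s4": "o1"}

# Savage-style sum over states: U(f) = sum_i u(f(s_i)) P(s_i).
eu_over_states = sum(u[f[s]] * P[s] for s in S)

# Regrouped by outcome: u(o) * P("we're in a state where f evaluates to o").
# P_event is computed by brute force here only to check the algebra; the point
# above is that an agent could get these event probabilities some other way
# (e.g. LI) without ever enumerating S.
def P_event(o):
    return sum(P[s] for s in S if f[s] == o)

eu_over_outcomes = sum(u[o] * P_event(o) for o in ["o0", "o1"])
assert abs(eu_over_states - eu_over_outcomes) < 1e-12   # same number, regrouped
```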
> We can either start with a basic set of “worlds” (e.g., Ω) and define our “propositions” or “events” as sets of worlds <...>
That’s just math-speak: you can define a lot of things as a lot of other things, but that doesn’t mean that the agent is going to be literally iterating over infinite sets of infinite bit strings and evaluating something on each of them.
By the way, I might not see any more replies to this.
> I’m looking at the Savage theory from your own https://plato.stanford.edu/entries/decision-theory/ and I see U(f) = ∑ᵢ u(f(sᵢ))P(sᵢ), so at least they have no problem with the domains (O and S) being different. Now I see the confusion is that to you Ω = S (and also O = S), but to me Ω = dom(u) = O.
(Just to be clear, I did not write that article.)
I think the interpretation of Savage is pretty subtle. The objects of preference (“outcomes”) and objects of belief (“states”) are treated as distinct sets. But how are we supposed to think about this?
The interpretation Savage seems to imply is that both outcomes and states are “part of the world”, but the agent has somehow segregated parts of the world into matters of belief and matters of preference. But however the agent has done this, it seems to be fundamentally beyond the Savage representation; clearly within Savage, the agent cannot represent meta-beliefs about which matters are matters of belief and which are matters of preference. So this seems pretty weird.
We could instead think of the objects of preference as something like “happiness levels” rather than events in the world. The idea of the representation theorem then becomes that we can peg “happiness levels” to real numbers. In this case, the picture looks more like standard utility functions; S is the domain of the function that gives us our happiness level (which can be represented by a real-valued utility).
Another approach which seems somewhat common is to take the Savage representation but require that S=O. Savage’s “acts” then become maps from world to world, which fits well with other theories of counterfactuals and causal interventions.
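A minimal sketch of this S = O variant, assuming an invented three-world setup:

```python
# Toy version of S = O: beliefs and utilities share one domain of worlds,
# and an act maps worlds to worlds.
Omega = ["rainy", "cloudy", "sunny"]
P = {"rainy": 0.2, "cloudy": 0.3, "sunny": 0.5}
u = {"rainy": -1.0, "cloudy": 2.0, "sunny": 10.0}

def act(world):
    # An act as a map world -> world, i.e. an intervention on the world itself.
    return "cloudy" if world == "rainy" else world

eu_act = sum(P[w] * u[act(w)] for w in Omega)    # 0.2*2.0 + 0.3*2.0 + 0.5*10.0 = 6.0
```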
So even within a Savage framework, it’s not entirely clear that we would want the domain of the utility function to be different from the domain of the belief function.
I should also have mentioned the super-common VNM picture, where utility has to be a function of arbitrary states as well.
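A minimal sketch of that VNM picture, with invented outcomes and numbers (u is defined on outcomes and extended to lotteries by taking expectations):

```python
u = {"apple": 1.0, "banana": 0.0, "cherry": 3.0}

def vnm_value(lottery):
    # lottery: dict mapping outcome -> probability (probabilities sum to 1).
    return sum(p * u[o] for o, p in lottery.items())

# A 50/50 apple/cherry gamble beats a sure banana iff its expected u is higher.
gamble = {"apple": 0.5, "cherry": 0.5}
prefers_gamble = vnm_value(gamble) > vnm_value({"banana": 1.0})   # 2.0 > 0.0
```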
> That’s just math-speak: you can define a lot of things as a lot of other things, but that doesn’t mean that the agent is going to be literally iterating over infinite sets of infinite bit strings and evaluating something on each of them.
The question is, what math-speak is the best representation of the things we actually care about?