My point is only that U is also reasonable, and possibly equivalent or more general. That there is no “case against” it.
I do agree that my post didn’t do a very good job of delivering a case against utility functions, and actually only argues that there exists a plausibly-more-useful alternative to a specific view which includes utility functions as one of several elements.
Utility functions definitely aren’t more general.
A classical probability distribution over Ω with a utility function understood as a random variable can easily be converted to the Jeffrey-Bolker framework, by taking the JB algebra as the sigma-algebra, and V as the expected value of U. Technically the sigma-algebra needs to be atomless to fit JB exactly, but Zoltan Domotor (Axiomatization of Jeffrey Utilities) generalizes this considerably.
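To make that direction concrete, here is a minimal finite sketch of the conversion (toy numbers of my own choosing, and ignoring the atomlessness requirement):

```python
from itertools import chain, combinations

# Toy classical picture: a probability distribution P over Omega and a utility
# function U understood as a random variable on Omega. (Values are arbitrary.)
omega = ["w1", "w2", "w3"]
P = {"w1": 0.2, "w2": 0.3, "w3": 0.5}
U = {"w1": 0.0, "w2": 1.0, "w3": 4.0}

def events(ws):
    """The powerset of Omega, serving here as both the sigma-algebra and the JB algebra."""
    return chain.from_iterable(combinations(ws, r) for r in range(len(ws) + 1))

def prob(event):
    return sum(P[w] for w in event)

def V(event):
    """JB value of an event: the expected value of U conditional on that event."""
    p = prob(event)
    return sum(P[w] * U[w] for w in event) / p if p > 0 else None

for e in events(omega):
    print(set(e), prob(e), V(e))
```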
I’ve heard people say that there is a way to convert in the other direction, but that it requires ultrafilters (so in some sense it’s very non-constructive). I haven’t been able to find this construction yet or had anyone explain how it works.
So it seems to me (though I recognize that I haven’t shown this in detail) that the space of computable values is strictly broader in the JB framework: a computable utility function plus a computable probability distribution gives a computable JB-value, but a computable JB-value need not correspond to any computable utility function.
Thus, the spaces of minds which can be described by the two frameworks might be equivalent, but the spaces of minds which can be described by computations do not seem to be; there, the JB space is larger.
I don’t see why any “good” utility function should be uncomputable.
Well, the Jeffrey-Bolker kind of explanation is as follows: agents really only need to consider and manipulate the probabilities and expected values of events (ie, propositions in the agent’s internal language). So it makes some sense to assume that these probabilities and expected values are computable. But this does not imply (as far as I know) that we can construct ‘worlds’ as maximal specifications of which propositions are true/false and then define a utility function on those worlds which is consistent with the computable expected values and have that utility function itself be computable. And indeed it seems rather plausible to me that this is not the case, even for values which otherwise seem relatively unremarkable, as illustrated by examples like the procrastination paradox.
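To illustrate where the gap shows up, here is a toy sketch of the procrastination example (the specific prior is my own invention, just for concreteness):

```python
from fractions import Fraction

# Toy prior: the button is pressed on each day independently with probability 1/2.
# The agent cares only about whether the button is ever pressed (1) or never (0).

def P_pressed_by(n: int) -> Fraction:
    """Probability of the event 'the button is pressed on some day <= n'."""
    return 1 - Fraction(1, 2) ** n

def V_pressed_by(n: int) -> Fraction:
    """Value of that event: the button does get pressed, so value 1."""
    return Fraction(1)

def V_trivial_event() -> Fraction:
    """Value of the trivial event: P(ever pressed)*1 + P(never pressed)*0,
    which is 1 under this prior, since P(never pressed) = 0."""
    return Fraction(1)

# Each quantity above is a finite computation on a finitely described event.
# The world-level utility behind them, U(w) = 1 iff the button is pressed on
# some day of the infinite history w, cannot be evaluated from any finite
# prefix of w; that is the sense in which the event-level values can be
# computable while the utility function on worlds is not.
print(P_pressed_by(10), V_pressed_by(10), V_trivial_event())
```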
I think there is a good reason to imagine that the agent structures its ontology around its perceptions. The agent cannot observe whether-the-button-is-ever-pressed; it can only observe, on a given day, whether the button has been pressed on that day. |Omega|=2 is too small to even represent such perceptions.
I agree with the first sentence; however, Omega is merely the domain of U, and it does not need to be the entire ontology. In this case Omega={“button has been pressed”, “button has not been pressed”} and P(“button has been pressed” | “I’m pressing the button”)~1. Obviously, there is also no problem with extending Omega with the perceptions, all the way up to |Omega|=4, or with adding some clocks.
I’m not sure why you say Omega can be the domain of U but not the entire ontology. This seems to mean that we don’t know how to take expected values for arbitrary events. Also it means you are no longer advocating for the model I’m arguing against, where U is a random variable.
We could expand the scenario so that every “day” is represented by an n-bit string.
If you want to force the agent to remember the entire history of the world, then you’ll run out of storage space before you need to worry about computability. A real agent would have to start forgetting days, or keep some compressed summary of that history. It seems to me that Jeffrey would “update” the daily utilities into total expected utility; in that case, U can do something similar.
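Roughly the kind of thing I have in mind (a toy sketch; the details are my own invention):

```python
# A toy agent that never stores the full history: it keeps a compressed summary
# of past days and defines its utility on that summary rather than on complete
# world-histories.

class SummaryAgent:
    def __init__(self):
        self.day = 0
        self.ever_pressed = False   # compressed summary of everything seen so far

    def observe(self, pressed_today: bool) -> None:
        """Fold today's observation into the summary, then forget the raw day."""
        self.day += 1
        self.ever_pressed = self.ever_pressed or pressed_today

    def utility(self) -> float:
        """U defined on the summary, not on an infinite history."""
        return 1.0 if self.ever_pressed else 0.0

agent = SummaryAgent()
for obs in [False, False, True, False]:
    agent.observe(obs)
print(agent.day, agent.utility())   # 4 1.0
```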
I agree that we can put even more stringent (and realistic) requirements on the computational power of the agent, and then both JB and random-variable treatments become implausible, in so far as those treatments involve infinitely large representations.
I still think that the Jeffreyesque representational choice of using compact event-propositions, rather than fully-specified worlds, seems more plausible with respect to such bounded agents.
You defined U at the very beginning, so there is no need to send these new facts to U, it doesn’t care. Instead, you are describing a problem with P, and it’s a hard problem, but Jeffrey also uses P, so that doesn’t solve it.
As per my earlier comment on “Omega is merely the domain of U”, I think here you’re abandoning elements of the random-variable approach to U, and in fact reasoning in a more JB-esque way.
> … set our model to be a list of “events” we’ve observed …
I didn’t understand this part.
If you “evaluate events”, then events have some sort of bit representation in the agent, right? I don’t clearly see the events in your “Updates Are Computable” example, so I can’t say much and I may be confused, but I have a strong feeling that you could define U as a function on those bits, and get the same agent.
Yeah, it seems like we’re talking past each other here and would need to do more work to unpack what’s going on. All I can think to say right now is this: the usual random-variable approach to defining U requires that probabilities respect countable additivity, because the event “the button is ever pressed” is just the set of individual worlds where that happens (worlds where the button gets pressed on some particular day). This is the root of the computational difficulty in the standard approach. JB doesn’t require countable additivity, since it isn’t a rule which agents can enforce on their beliefs by touching only finitely many of them. This harkens back to something you said earlier:
> Instead, you are describing a problem with P, and it’s a hard problem, but Jeffrey also uses P, so that doesn’t solve it.
Which I agree with in this case, except that JB does “solve” it by explicitly relaxing that constraint.
Again, this is a way in which JB is more general, not less; JB could follow that constraint, if you like.
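To spell that constraint out in the button example (the decomposition here is mine, written out just for concreteness): “the button is ever pressed” is the countable union E_1 ∪ E_2 ∪ E_3 ∪ …, where E_n is “the button is pressed on day n”, and countable additivity demands P(E_1 ∪ E_2 ∪ …) = lim_N P(E_1 ∪ … ∪ E_N), a single constraint tying together infinitely many of the agent’s beliefs at once. That is exactly the requirement JB declines to impose.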
> A classical probability distribution over Ω with a utility function understood as a random variable can easily be converted to the Jeffrey-Bolker framework, by taking the JB algebra as the sigma-algebra, and V as the expected value of U.
Ok, you’re saying that JB is just a set of axioms, and U already satisfies those axioms. And in this construction “event” really is a subset of Omega, and “updates” are just updates of P, right? Then of course U is not more general; I had the impression that JB was a more distinct and specific thing.
Regarding the other direction, my sense is that you will have a very hard time writing down these updates, and when it works, the code will look a lot like code with a utility function. But, again, the example in “Updates Are Computable” isn’t detailed enough for me to argue anything. Although now that I look at it, it does look a lot like U(p)=1-p(“never press the button”).
> events (ie, propositions in the agent’s internal language)
I think you should include this explanation of events in the post.
> construct ‘worlds’ as maximal specifications of which propositions are true/false
It remains totally unclear to me why you demand the world to be such a thing.
> I’m not sure why you say Omega can be the domain of U but not the entire ontology.
My point is that if U has two output values, then it only needs two possible inputs. Maybe you’re saying that if |dom(U)|=2, then there is no point in having |dom(P)|>2, and maybe you’re right, but I feel no need to make such claims. Even if the domains are different, they are not unrelated; Omega is still in some way contained in the ontology.
> I agree that we can put even more stringent (and realistic) requirements on the computational power of the agent
We could and I think we should. I have no idea why we’re talking math, and not writing code for some toy agents in some toy simulation. Math has a tendency to sweep all kinds of infinite and intractable problems under the rug.
> It remains totally unclear to me why you demand the world to be such a thing.
Ah, if you don’t see ‘worlds’ as meaning any such thing, then I wonder, are we really arguing about anything at all?
I’m using ‘worlds’ that way in reference to the same general setup which we see in propositions-vs-models in model theory, or in Ω vs the σ-algebra in the Kolmogorov axioms, or in Kripke frames, and perhaps some other places.
We can either start with a basic set of “worlds” (eg, Ω) and define our “propositions” or “events” as sets of worlds, where that proposition/event ‘holds’ or ‘is true’ or ‘occurs’; or, equivalently, we could start with an algebra of propositions/events (like a σ-algebra) and derive worlds as maximally specific choices of which propositions are true and false (or which events hold/occur).
> My point is that if U has two output values, then it only needs two possible inputs. Maybe you’re saying that if |dom(U)|=2, then there is no point in having |dom(P)|>2, and maybe you’re right, but I feel no need to make such claims.
Maybe I should just let you tell me what framework you are even using in the first place. There are two main alternatives to the Jeffrey-Bolker framework which I have in mind: the Savage axioms, and also the thing commonly seen in statistics textbooks where you have a probability distribution which obeys the Kolmogorov axioms and then you have random variables over that (random variables being defined as functions of type Ω→R). A utility function is then treated as a random variable.
It doesn’t sound like your notion of utility function is any of those things, so I just don’t know what kind of framework you have in mind.
> Maybe I should just let you tell me what framework you are even using in the first place.
I’m looking at the Savage theory from your own https://plato.stanford.edu/entries/decision-theory/ and I see U(f)=∑u(f(si))P(si), so at least they have no problem with the domains (O and S) being different. Now I see the confusion is that to you Omega=S (and also O=S), but to me Omega=dom(u)=O.
Furthermore, if O={o0,o1}, then I can group the terms into u(o0)P(“we’re in a state where f evaluates to o0”) + u(o1)P(“we’re in a state where f evaluates to o1”), so I’m just moving all of the complexity out of EU and into P, which I assume to work by some magic (e.g. LI) that doesn’t involve literally iterating over every possible state in S.
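A toy numerical check of that regrouping (the numbers here are mine, not from the entry):

```python
# Savage-style expected utility computed two ways: summing over states, and
# regrouping by outcome so the state-structure is pushed into P.
states = ["s1", "s2", "s3"]
P = {"s1": 0.2, "s2": 0.3, "s3": 0.5}
f = {"s1": "o0", "s2": "o1", "s3": "o0"}   # an act: state -> outcome
u = {"o0": 0.0, "o1": 1.0}

EU_states = sum(u[f[s]] * P[s] for s in states)                              # sum over S
EU_outcomes = sum(u[o] * sum(P[s] for s in states if f[s] == o) for o in u)  # sum over O

assert abs(EU_states - EU_outcomes) < 1e-12
print(EU_states, EU_outcomes)   # 0.3 0.3
```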
> We can either start with a basic set of “worlds” (eg, Ω) and define our “propositions” or “events” as sets of worlds <...>
That’s just math speak, you can define a lot of things as a lot of other things, but that doesn’t mean that the agent is going to be literally iterating over infinite sets of infinite bit strings and evaluating something on each of them.
By the way, I might not see any more replies to this.
> I’m looking at the Savage theory from your own https://plato.stanford.edu/entries/decision-theory/ and I see U(f)=∑u(f(si))P(si), so at least they have no problem with the domains (O and S) being different. Now I see the confusion is that to you Omega=S (and also O=S), but to me Omega=dom(u)=O.
(Just to be clear, I did not write that article.)
I think the interpretation of Savage is pretty subtle. The objects of preference (“outcomes”) and objects of belief (“states”) are treated as distinct sets. But how are we supposed to think about this?
The interpretation Savage seems to imply is that both outcomes and states are “part of the world”, but the agent has somehow segregated parts of the world into matters of belief and matters of preference. But however the agent has done this, it seems to be fundamentally beyond the Savage representation; clearly within Savage, the agent cannot represent meta-beliefs about which matters are matters of belief and which are matters of preference. So this seems pretty weird.
We could instead think of the objects of preference as something like “happiness levels” rather than events in the world. The idea of the representation theorem then becomes that we can peg “happiness levels” to real numbers. In this case, the picture looks more like standard utility functions; S is the domain of the function that gives us our happiness level (which can be represented by a real-valued utility).
Another approach which seems somewhat common is to take the Savage representation but require that S=O. Savage’s “acts” then become maps from world to world, which fits well with other theories of counterfactuals and causal interventions.
So even within a Savage framework, it’s not entirely clear that we would want the domain of the utility function to be different from the domain of the belief function.
I should also have mentioned the super-common VNM picture, where utility has to be a function of arbitrary states as well.
> That’s just math speak, you can define a lot of things as a lot of other things, but that doesn’t mean that the agent is going to be literally iterating over infinite sets of infinite bit strings and evaluating something on each of them.
The question is, what math-speak is the best representation of the things we actually care about?