So, I mean, yeah, you can make the problem go away by assuming bounded utility, but if you were trying to say something more than that (that some bounded utility function is somehow “closer” to unbounded utility), then no such notion is meaningful.
Say our utility function assigns some actual thing in the universe a value V1, and that it is bounded by a value X. What I’m saying is that we can make the problem go away by assuming bounded utility without ever having to define the ratio between V1 and X as a specific finite number (the ratio is the relevant quantity here, since it would not change if the utility function were rescaled).
This means that, if your utility function is something like “number of happy human beings”, you don’t have to worry about your utility function breaking if the maximum number of happy human beings turns out to be larger than you expected, since you never have to commit to such an expectation. See my sub-sub-reply to Eigil Rischel’s sub-reply for elaboration.
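To make this concrete, here is a rough numerical sketch of what I mean (the numbers are made up, and the capped utility u_C(n) = min(n, C) is just one convenient way of representing an unspecified bound): for any two finite outcomes, the comparison comes out the same way for every sufficiently large cap, so nothing ever forces you to commit to a particular value of the bound.

```python
# Rough sketch, made-up numbers: a capped utility u_C(n) = min(n, C),
# where n is (say) a number of happy people and C is the unspecified bound.
def capped_utility(n, cap):
    return min(n, cap)

outcome_a = 10**6   # hypothetical finite outcome
outcome_b = 10**9   # hypothetical larger finite outcome

for cap in [1e3, 1e6, 1e12, 1e100]:
    prefers_b = capped_utility(outcome_b, cap) > capped_utility(outcome_a, cap)
    print(f"cap={cap:g}: {'prefer B' if prefers_b else 'tie'}")
# Once the cap exceeds both outcomes (here, any cap >= 10**9), the comparison is
# always the same, which is the sense in which the ratio between V1 and X never
# needs to be pinned down as a specific number.
```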
OK, so going by that, you’re suggesting, like, introducing varying caps and then taking limits as the cap goes to infinity? It’s an interesting idea, but I don’t see why one would expect it to have anything to do with preferences.
Yes, I think that’s a good description.
I don’t see why one would expect it to have anything to do with preferences.
In my case, it’s a useful distinction because I’m the kind of person who thinks that showing that a real thing is infinite requires an infinite amount of information. This means I can say things like “my utility function scales upward linearly with the number of happy people” without things breaking, because it is essentially impossible to convince me that any finite set of actions could legitimately cause a literally infinite number of happy people to exist.
For people who believe they could achieve actually infinitely high values in their utility functions, the issues you point out still hold. But I think my utility function is bounded by something eventually, even if I can’t tell you what that bound actually is.
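As a rough illustration of where this bites (again with made-up numbers): a St. Petersburg-style wager has a finite capped expected utility for every particular bound, but that value keeps growing as the bound is raised, which is where the issues you point out re-appear for someone who insists on taking the bound to infinity.

```python
# Rough sketch, made-up wager: a St. Petersburg-style gamble that yields 2**k
# happy people with probability 2**-k, for k = 1, 2, 3, ...
# With unbounded utility its expected utility diverges; with a cap C it is
# finite for every C, but grows (roughly like log2(C)) as the cap is raised.
def capped_expected_utility(cap, max_k=200):
    return sum(2.0**-k * min(2**k, cap) for k in range(1, max_k + 1))

for cap in [1e2, 1e4, 1e8, 1e16]:
    print(f"cap={cap:g}: expected capped utility = {capped_expected_utility(cap):.2f}")
# Every line is finite, which is why assuming *some* bound (even an unspecified
# one) defuses the problem for me; but the values grow without limit as the cap
# does, which is where the issues remain for anyone who takes the cap to infinity.
```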
Apologies, but it sounds like you’ve gotten some things mixed up here? The issue is boundedness of utility functions, not whether they can take on infinity as a value. I don’t think anyone here is arguing that utility functions don’t need to be finite-valued. All the things you’re saying seem to be about the latter question rather than the former, or possibly you’re conflating the two?
Perhaps in the second paragraph this is just an issue of language (when you say “infinitely high”, do you actually mean “arbitrarily high”?), but in the first paragraph this does not seem to be the case.
I’m also not sure you understood the point of my question, so let me make it more explicit. Taking the idea of a utility function and modifying it as you describe is what I called “backwards reasoning” above—starting from the idea of a utility function, rather than starting from preferences. Why should one believe that modifying the idea of a utility function would result in something that is meaningful about preferences, without any sort of theorem to say that one’s preferences must be of this form?
Ahh, thanks for clarifying. I think what happened was that your modus ponens was my modus tollens—so when I think about my preferences, I ask “what conditions do my preferences need to satisfy for me to avoid being exploited or undoing my own work?” whereas you ask something like “if my preferences need to correspond to a bounded utility function, what should they be?” [1]. As a result, I went on a tangent about infinity to begin exploring whether my modified notion of a utility function would break in ways that regular ones wouldn’t.
Why should one believe that modifying the idea of a utility function would result in something that is meaningful about preferences, without any sort of theorem to say that one’s preferences must be of this form?
I agree, one shouldn’t conclude anything without a theorem. Personally, I would approach the problem by looking at the infinite wager comparisons discussed earlier and trying to formalize them into additional rationality conditions (rough sketches of what I mean follow below). We’d need
an axiom describing what it means for one infinite wager to be “strictly better” than another, and
an axiom describing what kinds of infinite wagers it is rational to be indifferent towards.
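To give a sense of the kind of thing I mean, here are very rough, tentative sketches of those two conditions (nothing close to a worked-out proposal):

```latex
% Very rough, tentative sketches only, not a worked-out proposal.
% $A$ and $B$ are wagers over a countable state space; $A(s)$ is the outcome
% that $A$ yields in state $s$.
\begin{enumerate}
  \item (``Strictly better'') If $A(s) \succeq B(s)$ for every state $s$, and
        $\Pr\{\, s : A(s) \succ B(s) \,\} > 0$, then $A \succ B$.
  \item (Indifference) If $B = A \circ \pi$ for some probability-preserving
        bijection $\pi$ of the states, then $A \sim B$.
\end{enumerate}
```

The first is just statewise dominance extended to infinite wagers; the second says that rearranging equally likely states cannot matter.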
Then, I would try to find a decision system that satisfies these new conditions as well as the VNM-rationality axioms (where VNM-rationality applies). If such a system exists, these axioms would probably bar it from being represented fully as a utility function. If no such system exists, that would be interesting in itself. In any case, whatever happens will tell us more about either the structure our preferences should follow or the structure our rationality axioms should follow (if we cannot find a system).
Of course, maybe my modification of the idea of a utility function turns out to show that such a decision system exists, by construction. In that case, modifying the idea of a utility function would also tell me that my preferences should follow the structure of that modification.
Does that address the question?
[1] From your post:
We should say instead, preferences are not up for grabs—utility functions merely encode these, remember. But if we’re stating idealized preferences (including a moral theory), then these idealized preferences had better be consistent—and not literally just consistent, but obeying rationality axioms to avoid stupid stuff. Which, as already discussed above, means they’ll correspond to a bounded utility function.
Ahh, thanks for clarifying. I think what happened was that your modus ponens was my modus tollens—so when I think about my preferences, I ask “what conditions do my preferences need to satisfy for me to avoid being exploited or undoing my own work?” whereas you ask something like “if my preferences need to correspond to a bounded utility function, what should they be?” [1]
That doesn’t seem right. The whole point of what I’ve been saying is that we can write down some simple conditions that ought to be true in order to avoid being exploitable or otherwise incoherent, and then it follows as a conclusion that they have to correspond to a [bounded] utility function. I’m confused by your claim that you’re asking about conditions, when you haven’t been talking about conditions, but rather ways of modifying the idea of decision-theoretic utility.
Something seems to be backwards here.
I agree, one shouldn’t conclude anything without a theorem. Personally, I would approach the problem by looking at the infinite wager comparisons discussed earlier and trying to formalize them into additional rationality conditions. We’d need
an axiom describing what it means for one infinite wager to be “strictly better” than another, and
an axiom describing what kinds of infinite wagers it is rational to be indifferent towards.
I’m confused here; it sounds like you’re just describing, in the VNM framework, the strong continuity requirement, or in Savage’s framework, P7? Of course Savage’s P7 doesn’t directly talk about these things, it just implies them as a consequence. I believe the VNM case is similar although I’m less familiar with that.
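For reference, here is P7 roughly as I remember it (see my post on Savage’s proof for the careful statement, in case I’ve garbled anything here):

```latex
% Savage's P7, stated roughly from memory.  Here $g(s)$ denotes the constant act
% that yields, in every state, the consequence $g$ assigns to state $s$, and $B$
% is any event.
\begin{itemize}
  \item If $f \succeq g(s)$ given $B$ for every $s \in B$, then $f \succeq g$ given $B$.
  \item If $f \preceq g(s)$ given $B$ for every $s \in B$, then $f \preceq g$ given $B$.
\end{itemize}
```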
Then, I would try to find a decision system that satisfies these new conditions as well as the VNM-rationality axioms (where VNM-rationality applies). If such a system exists, these axioms would probably bar it from being represented fully as a utility function.
That doesn’t make sense. If you add axioms, you’ll only be able to conclude more things, not fewer. Such a thing will necessarily be representable by a utility function (that is valid for finite gambles), since we have the VNM theorem; and then additional axioms will just add restrictions. Which is what P7 or strong continuity do!
Thanks for the reply. I re-read your post and your post on Savage’s proof, and you’re right on all counts. For some reason, it didn’t actually click for me that P7 was introduced to address unbounded utility functions and that boundedness was a consequence of taking the axioms to their logical conclusion.
Well, it’s worth noting that P7 is introduced to address gambles with infinitely many possible outcomes, regardless of whether those outcomes are bounded or not (which is the reason I argue above you can’t just get rid of it). But yeah. Glad that’s cleared up now! :)