I mentioned this briefly in a footnote on the other post. The summary is that it’s not exactly clear to me what it means to have “unbounded utility functions” if you think there are only finitely many conceivable outcomes. Isn’t there then some best outcome, out of the 10^30 that you think deserve non-zero probability?
Perhaps there could be infinitely many possible decisions, with each decision involving only finitely many possible outcomes? But that seems implausible to me. For example, consider my parents making a decision about how to raise me—if there are infinitely many decisions I might face, then it seems like there are infinitely many possible outcomes of their decision. To me this seems worse than abstract worries about continuity.
And if there are infinitely many possible outcomes of a decision, what does it mean to force my beliefs to have finite support? If I just consider a single set of finitely-supported beliefs, what exactly am I doing? If I take limits, then as you point out we can end up back at the same paradox.
I guess the out here would be to represent outcomes as sequences of finitely supported probability distributions, effectively adding additional structure (that is presumably related to how that distribution came about). That means that I don’t need to be indifferent between two sequences with the same limit, I can care about that extra data.
This is the kind of thing I have in mind by abandoning probability theory and representing my uncertainty with some richer structure. I don’t find “sequence of finitely-supported probability distributions” particularly compelling but it seems like something you could try (and if you did it that way maybe you wouldn’t have to give up on probability theory, though as I suggested I suspect that’s where this road will end).
I guess the two questions, for that and any other proposal, would be: (i) where does this extra structure come from? what about my epistemic state determines how it gets represented as a sequence? (ii) are there any sensible preferences over the new enlarged space?
(I will probably make some posts in the future with more concrete examples of how totally messed up the “intuitive” unbounded utility functions are, which will hopefully make those concerns sharper.)
The way I was envisioning it, there would be infinitely many possible outcomes but you could only have a belief about finitely many of them at one time.
I don’t think this is too outrageous—for example, if there were uncountably many possible outcomes then we all agree that (no matter the setup) there would be unmeasurable sets that you could not have a belief over.
The main motivation here is just that this is a mathematically nice way to set it up. For example, if the set of all possible outcomes is A, then conv(A) (the convex hull of A) will be the set of all finite-support probability distributions over A—it comes up naturally.
[More formal version: identify the set A as a subset of the vector space R^A of functions from A to R, where each element x of A is identified with the characteristic function that returns 1 on input x and 0 otherwise. Then the convex hull of A can be defined as the intersection of all convex supersets of A (all of which are subsets of the vector space R^A). It is then a relatively straightforward theorem that this convex hull of A happens to be exactly the set of all functions A→R such that (1) they return 0 on all but finitely many elements, (2) they have non-negative range, and (3) the sum of their non-zero outputs is exactly 1; in other words, the convex hull of A is exactly the set of finite-support probability distributions over the set A. My point here is merely that finite-support probability distributions came up naturally, even in the context of an infinite outcome space A, and even though the definition of convex hull did not explicitly mention finite supports in any way.]
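To make this concrete, here is a minimal sketch (the function names are mine, purely for illustration) of finite-support distributions as dictionaries over an infinite outcome space: convex combinations of point masses always land back in the set of finite-support distributions, which is the content of the convex-hull characterization above.

```python
# Represent a finite-support distribution over an arbitrary outcome space A
# as a dict mapping outcomes to positive probabilities summing to 1.

def point_mass(x):
    """The characteristic-function embedding of an outcome x into R^A."""
    return {x: 1.0}

def mix(p, q, weight):
    """Convex combination weight*p + (1-weight)*q of two distributions."""
    assert 0.0 <= weight <= 1.0
    out = {x: weight * prob for x, prob in p.items()}
    for x, prob in q.items():
        out[x] = out.get(x, 0.0) + (1.0 - weight) * prob
    return out

# Mixing point masses over an infinite space (here: all integers) only
# ever produces finitely many non-zero entries.
d = mix(point_mass(1), mix(point_mass(2), point_mass(3), 0.5), 0.4)
assert abs(sum(d.values()) - 1.0) < 1e-12  # still a probability distribution
assert len(d) == 3                          # support is still finite
```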
Upon reflection, I agree that sequences of such finite-support distributions are kind of an ugly hack. In particular, it’s not clear how to mix together two such sequences (i.e. how to take a convex combination of them, something we may want to do with our beliefs).
We can just stick to finite-support distributions themselves, without allowing sequences of them. (Perhaps a motivation could be that our finite brains can only think about finitely many plausible outcomes at a time, or something like that.) In that case, I think the main drawback is only that we cannot model the St. Petersburg paradox. However, given your counterexamples, perhaps this is a feature rather than a bug...
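For what it’s worth, here is a toy sketch of why no finite-support distribution captures St. Petersburg (using the standard setup, where outcome n has probability 2^-n and utility 2^n): every finite truncation has finite expected utility, but the partial sums of the expected-utility series grow without bound.

```python
# Each term of the St. Petersburg expected-utility series equals
# (2^-k) * (2^k) = 1, so the partial sums diverge: there is no single
# finite-support distribution (nor any limit) with a stable expected
# utility for the full lottery.

def partial_sum(n):
    """Sum of probability * utility over the first n outcomes."""
    return sum((0.5 ** k) * (2 ** k) for k in range(1, n + 1))

for n in (10, 100, 1000):
    print(n, partial_sum(n))  # 10.0, 100.0, 1000.0 -- diverges linearly
```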
I guess I’m confused about how to represent my current beliefs with a finitely-supported probability distribution. It looks to me like there are infinitely many ways the universe could be (in the sense that e.g. I could start listing them and never stop, or that there are functions f:universes→universes for which f(U) is bigger than U while still being plausible).
I don’t expect to enumerate all these infinitely many universes, but practically how am I supposed to think about my preferences if it feels like there are clearly infinitely many possible states of affairs?
Your comment gave me pause, and certainly makes me lean away from finite-support probability distributions somewhat.
However, if the problem is that you can actively generate more and more plausible universes without stopping, then it does seem at some level like your belief structure is a sequence of finite-support probability distributions, doesn’t it? As you mentally generate more and more plausible universes, your belief gets updated to a distribution with larger and larger support. The main problem is just that “sequence of distributions” is a much uglier mathematical object than a single distribution.
Another thought: if you can actively mentally generate more and more possible universes, and if, in addition, the universes you generate have such large utilities that they become “more and more important” to consider (i.e. even after multiplying by their diminishing probabilities, the absolute value of probability*utility is increasing), then you are screwed. This was shown nicely by your examples. So in some sense, we have to restrict to situations where the possible universes you mentally generate are diminishing in importance (i.e. even if their utility is increasing, their probability is diminishing fast enough to make the series of probability*utility terms absolutely convergent).
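Here is a toy illustration of that distinction (the example numbers are mine, chosen just to show the two regimes):

```python
# If probability * utility shrinks fast enough, the expected-utility series
# converges absolutely and ever-larger finite supports approximate it well;
# if the terms do not shrink, the partial sums never settle down.

def partial_sums(prob, util, n):
    """First n partial sums of sum_k prob(k) * util(k)."""
    total, sums = 0.0, []
    for k in range(1, n + 1):
        total += prob(k) * util(k)
        sums.append(total)
    return sums

# Utility grows like k but probability shrinks like 2^-k: the terms k * 2^-k
# are summable, and the partial sums converge (to 2).
print(partial_sums(lambda k: 0.5 ** k, lambda k: k, 30)[-1])

# Utility grows like 2^k against probability 2^-k: every term equals 1,
# so the partial sums grow without bound.
print(partial_sums(lambda k: 0.5 ** k, lambda k: 2 ** k, 30)[-1])
```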