What are you referring to as the generalised theorem?
Try this:
Theorem: Using the notation from here, except we will allow lotteries to have infinitely many outcomes as long as the probabilities sum to 1.
Suppose an ordering satisfies the four axioms of completeness, transitivity, continuity, and independence, and the following additional axiom:
Axiom (5): Let L = Sum(i=0...infinity, p_i M_i) with Sum(i=0...infinity, p_i) = 1. If N >= Sum(i=0...n, p_i M_i)/Sum(i=0...n, p_i), then N >= L. And similarly with the arrows reversed.
Then an agent satisfying axioms (1)-(5) has preferences given by a bounded utility function u such that L > M iff Eu(L) > Eu(M).
Axiom (5): Let L = Sum(i=0...infinity, p_i M_i) with Sum(i=0...infinity, p_i) = 1. If N >= Sum(i=0...n, p_i M_i)/Sum(i=0...n, p_i), then N >= L. And similarly with the arrows reversed.
That appears to be an axiom that probabilities go to zero enough faster than utilities that total utility converges (in a setting in which the sure outcomes are a countable set). It lacks something in precision of formulation (e.g. what is being quantified over, and in what order?) but it is fairly clear what it is doing. There’s nothing like it in VNM’s book or the Wiki article, though. Where does it come from?
Yes, in the same way that VNM’s axioms are just what is needed to get affine utilities, an axiom something like this will give you bounded utilities. Does the axiom have any intuitive appeal, separate from it providing that consequence? If not, the axiom does not provide a justification for bounded utilities, just an indirect way of getting them, and you might just as well add an axiom saying straight out that utilities are bounded.
None of which solves the problem that entirelyuseless cited. The above axiom forbids the Solomonoff prior (for which p_i M_i grows with busy beaver fastness), but does not suggest any replacement universal prior.
That appears to be an axiom that probabilities go to zero enough faster than utilities that total utility converges (in a setting in which the sure outcomes are a countable set).
No, the axiom doesn’t put any constraints on the probability distribution. It merely constrains preferences; specifically, it says that preferences for infinite lotteries should be the ‘limits’ of the preferences for finite lotteries. One can think of it as a slightly stronger version of the following:
Axiom (5′): Let L = Sum(i=0...infinity, p_i M_i) with Sum(i=0...infinity, p_i) = 1. If N >= M_i for all i, then N >= L. And similarly with the arrows reversed. (In other words, if N is preferred over every element of a lottery then N is preferred over the lottery.)
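Spelled out with explicit quantifiers (one reading of (5′), writing >= for weak preference as above):

\[
L=\sum_{i=0}^{\infty} p_i M_i,\quad \sum_{i=0}^{\infty} p_i = 1:\qquad
\bigl(\forall i:\ N \ge M_i\bigr) \Rightarrow N \ge L,
\qquad
\bigl(\forall i:\ N \le M_i\bigr) \Rightarrow N \le L.
\]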
In fact, I’m pretty sure that axiom (5′) is strong enough, but I haven’t worked out all the details.
It lacks something in precision of formulation (e.g. what is being quantified over, and in what order?)
Sorry, there were some formatting problems, hopefully it’s better now.
(for which p_i M_i [formatting fixed] grows with busy beaver fastness)
The M_i’s are lotteries that the agent has preferences over, not utility values. Thus it doesn’t a priori make sense to talk about their growth rate.
I think I understand what the axiom is doing. I’m not sure it’s strong enough, though. There is no guarantee that there is any N that is >= M_i for all i (or for all large enough i, a weaker version which I think is what is needed), nor an N that is <= them. But suppose there are such an upper N_u and a lower N_l, thus giving a continuous range between them of N_p = p N_l + (1-p) N_u for all p in 0..1. There is no guarantee that the infimum of those p for which N_p is a lower bound is equal to the supremum of those for which it is an upper bound. The axiom needs to stipulate that lower and upper bounds N_l and N_u exist, and that there is no gap in the behaviours of the family N_p.
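One precise way to state the missing stipulation, as a sketch in the notation just introduced (with \(\succeq\) the preference order and L the infinite lottery):

\[
A=\{\,p\in[0,1] : N_p \succeq L\,\},\qquad B=\{\,p\in[0,1] : N_p \preceq L\,\},
\]
\[
\text{stipulate that } N_l \text{ and } N_u \text{ exist and that } \sup A = \inf B = p^{*},\ \text{so that } L \sim N_{p^{*}}.
\]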
One also needs some axioms to the effect that a formal infinite sum Sum{i>=0: p_i M_i} actually behaves like one, otherwise “Sum” is just a suggestively named but uninterpreted symbol. Such axioms might be invariance under permutation, equivalence to a finite weighted average when only finitely many p_i are nonzero, and distribution of the mixture process to the components for infinite lotteries having the same sequence of component lotteries. I’m not sure that this is yet strong enough.
The task these axioms have to perform is to uniquely extend the preference relation from finite lotteries to infinite lotteries. It may be possible to do that, but having thought for a while and not come up with a suitable set of axioms, I looked for a counterexample.
Consider the situation in which there is exactly one sure-thing lottery M. The infinite lotteries, with the axioms I suggested in the second paragraph, can be identified with the probability distributions over the non-negative integers, and they are equivalent when they are permutations of each other. All of the distributions with finite support (call these the finite lotteries) are equivalent to M, and must be assigned the same utility, call it u. Take any distribution with infinite support, and assign it an arbitrary utility v. This determines the utility of all lotteries that are weighted averages of that one with M. But that won’t cover all lotteries yet. Take another one and give it an arbitrary utility w. This determines the utility of some more lotteries. And so on. I don’t think any inconsistency is going to arise. This allows for infinitely many different preference orderings, and hence infinitely many different utility functions.
The construction is somewhat analogous to constructing an additive function from reals to reals, i.e. one satisfying f(a+b) = f(a) + f(b). The only continuous additive functions are multiplication by a constant, but there are infinitely many non-continuous additive functions.
An alternative approach would be to first take any preference ordering consistent with the axioms, then use the VNM axioms to construct a utility function for that preference ordering, and then to impose an axiom about the behaviour of that utility function, because once we have utilities it’s easy to talk about limits. The most straightforward such axiom would be to stipulate that U(Sum{i>=0: p_i M_i}) = Sum{i>=0: p_i U(M_i)}, where the sum on the right-hand side is an ordinary infinite sum of real numbers. The axiom would require this to converge.
This axiom has the immediate consequence that utilities are bounded, for if they were not, then for any probability distribution {i>=0: p_i} with infinite support, one could choose a sequence of lotteries whose utilities grew fast enough that Sum{i>=0: p_i U(M_i)} would fail to converge.
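To make the divergence concrete, here is a minimal numeric sketch. The choices p_i = 2^-(i+1) and U(M_i) = 2^(i+1) are purely illustrative (nothing in the axioms singles them out); they are picked so that every term p_i U(M_i) equals 1:

```python
# Partial sums of Sum{i>=0: p_i U(M_i)} with p_i = 2^-(i+1), a genuine
# probability distribution (it sums to 1), and hypothetical utilities
# U(M_i) = 2^(i+1), growing just fast enough that each term equals 1.
def partial_sum(n: int) -> float:
    return sum(2.0 ** -(i + 1) * 2.0 ** (i + 1) for i in range(n))

for n in (10, 100, 1000):
    print(n, partial_sum(n))  # prints n itself: the partial sums grow without bound
```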
Personally, I am not convinced that bounded utility is the way to go to avoid Pascal’s Mugging, because I see no principled way to choose the bound. The larger you make it, the more Muggings you are vulnerable to, but the smaller you make it, the more low-hanging fruit you will ignore: substantial chances of stupendous rewards.
In one of Eliezer’s talks, he makes a point about how bad an existential risk to humanity is. It must be measured not by the number of people who die in it when it happens, but by the loss of a potentially enormous future of humanity spreading to the stars. That is the real difference between “only” 1 billion of us dying, and all 7 billion. If you are moved by this argument, you must see a substantial gap between the welfare of 7 billion people and that of however many 10^n you foresee if we avoid these risks. That already gives substantial headroom for Muggings.
I think I understand what the axiom is doing. I’m not sure it’s strong enough, though. There is no guarantee that there is any N that is >= M_i for all i (or for all large enough i, a weaker version which I think is what is needed), nor an N that is <= them.
The M_i’s can themselves be lotteries. The idea is to group events into finite lotteries so that the M_i’s are >= N.
Personally, I am not convinced that bounded utility is the way to go to avoid Pascal’s Mugging, because I see no principled way to choose the bound.
There is no principled way to choose utility functions either, yet people seem to be fine with them.
My point is that if one takes the VNM theory seriously as justification for having a utility function, the same logic means it must be bounded.
There is no principled way to choose utility functions either, yet people seem to be fine with them.
The VNM axioms are the principled way. That’s not to say that it’s a way I agree with, but it is a principled way. The axioms are the principles, codifying an idea of what it means for a set of preferences to be rational. Preferences are assumed given, not chosen.
My point is that if one takes the VNM theory seriously as justification for having a utility function, the same logic means it must be bounded.
Boundedness does not follow from the VNM axioms. It follows from VNM plus an additional construction of infinite lotteries, plus additional axioms about infinite lotteries such as those we have been discussing. Basically, if utilities are unbounded, then there are St. Petersburg-style infinite lotteries with divergent utilities; if all infinite lotteries are required to have defined utilities, then utilities are bounded.
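Concretely, a minimal instance of the construction: if u is unbounded, one can choose lotteries M_i with u(M_i) >= 2^i (possible precisely because u is unbounded) and form

\[
L=\sum_{i=1}^{\infty} 2^{-i} M_i,
\qquad
\sum_{i=1}^{\infty} 2^{-i}\, u(M_i) \;\ge\; \sum_{i=1}^{\infty} 1 \;=\; \infty .
\]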
This is indeed a problem. Either utilities are bounded, or some infinite lotteries have no defined value. When probabilities are given by algorithmic probability, the situation is even worse: if utilities are unbounded then no expected utilities are defined.
But the problem is not solved by saying, “utilities must be bounded then”. Perhaps utilities must be bounded. Perhaps Solomonoff induction is the wrong way to go. Perhaps infinite lotteries should be excluded. (Finitists would go for that one.) Perhaps some more fundamental change to the conceptual structure of rational expectations in the face of uncertainty is called for.
They show that you must have a utility function, not what it should be.
Boundedness does not follow from the VNM axioms. It follows from VNM plus an additional construction of infinite lotteries, plus additional axioms about infinite lotteries such as those we have been discussing.
Well, the additional axiom is as intuitive as the VNM ones, and you need infinite lotteries if you are to model a world with infinite possibilities.
Perhaps Solomonoff induction is the wrong way to go.
This amounts to rejecting completeness. Suppose Omega offered to create a universe based on a Solomonoff prior; you’d have no way to evaluate this proposal.
They show that you must have a utility function, not what it should be.
Given your preferences, they do show what your utility function should be (up to affine transformation).

Assuming your preferences satisfy the axioms.
Well, the additional axiom is as intuitive as the VNM ones, and you need infinite lotteries if you are to model a world with infinite possibilities.
You need some, but not all of them.
This amounts to rejecting completeness.
By completeness I assume you mean assigning a finite utility to every lottery, including the infinite ones. Why not reject completeness? The St. Petersburg lottery is plainly one that cannot exist. I therefore see no need to assign it any utility.
Bounded utility does not solve Pascal’s Mugging, it merely offers an uneasy compromise between being mugged by remote promises of large payoffs and passing up unremote possibilities of large payoffs.
Suppose Omega offered to create a universe based on a Solomonoff prior; you’d have no way to evaluate this proposal.
I don’t care. This is a question I see no need to have any answer to. But why invoke Omega? The Solomonoff prior is already put forward by some as a universal prior, and it is already known to have problems with unbounded utility. As far as I know this problem is still unsolved.
By completeness I assume you mean assigning a finite utility to every lottery, including the infinite ones.

No, by completeness I mean that for any two lotteries you prefer one over the other.

So why not reject it in the finite case as well?

Actually, I would, but that’s digressing from the subject of infinite lotteries. As I have been pointing out, infinite lotteries are outside the scope of the VNM axioms and need additional axioms to be defined. It seems no more reasonable to me to require completeness of the preference ordering over St. Petersburg lotteries than to require that all sequences of real numbers converge.
Care to assign a probability to that statement?
“True.” At some point, probability always becomes subordinate to logic, which knows only 0 and 1. If you can come up with a system in which it’s probabilities all the way down, write it up for a mathematics journal.
If you’re going to cite this (which makes a valid point, but people usually repeat the password in place of understanding the idea), tell me what probability you assign to A conditional on A, to 1+1=2, and to an omnipotent God being able to make a weight so heavy he can’t lift it.
“True.” At some point, probability always becomes subordinate to logic, which knows only 0 and 1. If you can come up with a system in which it’s probabilities all the way down, write it up for a mathematics journal.
Ok, so care to present an a priori pure logic argument for why St. Petersburg lottery-like situations can’t exist?
Ok, so care to present an a priori pure logic argument for why St. Petersburg lottery-like situations can’t exist?
Finite approximations to the St. Petersburg lottery have unbounded values. The sequence does not converge to a limit.
In contrast, a sequence of individual gambles with expectations 1, 1⁄2, 1⁄4, etc. does have a limit, and it is reasonable to allow the idealised infinite sequence of them a place in the set of lotteries.
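In symbols, taking the standard payoff-doubling form of the St. Petersburg lottery for the first sum: its partial expectations grow without bound, while the series of expectations 1, 1⁄2, 1⁄4, etc. sums to 2:

\[
\sum_{i=1}^{n} 2^{-i}\cdot 2^{i} = n \;\longrightarrow\; \infty,
\qquad\text{whereas}\qquad
\sum_{i=1}^{\infty} 2^{-(i-1)} = 2 .
\]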
You might as well ask why the sum of an infinite number of ones doesn’t exist. There are ways of extending the real numbers with various sorts of infinite numbers, but they are extensions. The real numbers do not include them. The difficulty of devising an extension that allows for the convergence of all infinite sums is not an argument that the real numbers should be bounded.
Finite approximations to the St. Petersburg lottery have unbounded values.

They have unbounded expected values; that doesn’t mean the St. Petersburg lottery can’t exist, only that its expected value doesn’t.