Interesting! Can you explain more about what this part means? I’m unfamiliar with the math of measurable functions, and with the analogy to second-class citizenship.
> More pathological infinite games (such as St. Petersburg with every other payout in the series reversed in sign) are excluded from the start, but without having to exclude them by any criterion involving utility. (Utility is constructed from the axioms, so cannot be mentioned within them.) Like measurable functions that have no integral, well, that’s just what they are. There’s no point in demanding that they all should have one.
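To make the pathology concrete (my arithmetic, just spelling out the example above): the classic St. Petersburg game pays 2^n with probability 2^-n, so reversing the sign of every other payout turns the expected-value series into Grandi’s series, which has no sum:

$$\mathbb{E} \;=\; \sum_{n=1}^{\infty} 2^{-n}\,(-1)^{n+1}\,2^{n} \;=\; \sum_{n=1}^{\infty} (-1)^{n+1} \;=\; 1 - 1 + 1 - 1 + \cdots$$

The partial sums oscillate between 1 and 0 forever, so there is no expected value to demand.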
Suppose I tell you that I am God and if you send me $1000, you’ll get to play a pathological St. Petersburg game of the sort you just described, with the payoffs being in money in your Divine Bank Account. (Did you know you have one? You do!) Do you assign 0 credence to this hypothesis, and to the set of all hypotheses in the vicinity? If not, … well, nothing really, since presumably your utility is not linear in money. But what if it were? Or do you agree that utility can’t be linear in money?
I think everyone agrees that utility is not linear in money, although there are different ideas about what the relationship is or should be. But utility is linear in itself, so one can consider all bets to be denominated in utilons or utiles. I haven’t seen an agreed currency symbol for utilons. Maybe one could use the symbol ウ (katakana for the sound “oo”).
I basically assign 0 credence to the supposed offer of this game, although that is not quite the way I would put it. Rather, games of this sort are excluded (at least, by me) from the purview of utility theory. They are outside the scope of the preference relation and are not assigned a utility.
I think it reasonable to do this, and the argument “yes, but what if?” an empty one, because one can always say, “yes, but what if?” Yes, but what if God promised you $BIGNUM utiles for sawing your head off with a chainsaw? Yes, but what if mathematics is inconsistent, all the way down to propositional calculus? Yes, but what if all your arguments are wrong in a way you can’t see because some demon afflicts you? Yes, but what if you’re wrong? Then you’d be wrong! So you could be wrong!
So, despite the maxim that “0 and 1 are not probabilities”, at the meta-level, where the theory of probability and utility is constructed, I do as everyone does, and think in terms of ordinary logic, where everything has probability 0 or 1, and nothing in (0,1) is a truth value.
Thanks for the explanation! I think this is where we disagree. If you are going to exclude some possibilities, well, then the problem gets loads easier, doesn’t it? Imagine if I said “I’ve come up with a voting system which satisfies all of Arrow’s axioms, thus getting around his famous theorem” and then you qualified with “To make this work, I had to exclude certain scenarios from the purview of preference aggregation theory, namely, the ones that would make my system violate one of the axioms...”
Another way of putting it: Look, some people assign non-zero credence to these pathological scenarios. (I do, for example. As does anyone who takes “0 and 1 are not probabilities” seriously, I think.) These people also have preferences over these scenarios; they choose between them, you can ask them what they will prefer and they’ll answer, etc. So your system for taking someone’s beliefs and preferences and then spitting out a (possibly unbounded) utility function… either just says these people don’t have utility functions at all, or gives them utility functions constructed by ignoring some of their beliefs and preferences in favor of others. This seems bad to me.
> Imagine if I said “I’ve come up with a voting system which satisfies all of Arrow’s axioms, thus getting around his famous theorem” and then you qualified with “To make this work, I had to exclude certain scenarios from the purview of preference aggregation theory, namely, the ones that would make my system violate one of the axioms...”
I am actually excluding less than Savage does, not more: models of my axioms include all models of his, and more. And since Savage at first did not know that his axioms implied bounded utility, that cannot have been a consideration in his design of them.
People may give preferences involving pathological scenarios, but clearly those preferences cannot satisfy Savage’s axioms (since his axioms rule them out, and even more strongly than mine do).
There is no free lunch here. You can have preferences about everything in the Tegmark level 7 universe (or however high the hierarchy goes—somewhere I saw it extended several levels beyond Tegmark himself), but at the cost of them failing to obey reasonable-sounding properties of rational preference.
I think I agree with you that “Savage axioms imply bounded utility, so there” isn’t a strong argument. And the fact that you’ve found a set of axioms that don’t imply bounded utility makes it even weaker. My disagreement is with the claim that utility can/should be unbounded. I’m saying that making sense of various important kinds of scenarios/preferences requires (or at least, is best done via) bounded utility. You are saying those scenarios/preferences are unimportant to make sense of and we should ignore them. (And you are saying Savage agrees with you on this point). Right?
Also, I deny that bounded utility functions disobey reasonable-sounding properties of rational preference. For one thing, there are other axiom sets besides yours and Savage’s, ones which I like better anyway (e.g. Jeffrey-Bolker). For another… are you sure Savage’s axioms rule out the sorts of preferences I’m talking about? They don’t rule out bounded utility functions, after all. And so why would they rule out someone listening to the proposal, saying “Eh, it basically cancels out IMO; large amounts of money/debt don’t matter to me much” and refusing to pay up? (I am not super familiar with the Savage axioms to be honest; maybe they do rule out this person’s preferences. If so, so much the worse for them, I say.)
Re Jeffrey-Bolker, the only system I studied in detail was Savage’s, but my impression is that the fix I applied to that system can be applied to all the others that paint themselves into the corner of bounded utility, and with the same effect of removing that restriction. Do the Jeffrey-Bolker axioms either assume or imply bounded utility?
Having now read some expositions of the Jeffrey-Bolker theory, I can answer my own question.
The Jeffrey-Bolker axioms imply the finite utility of every prospect (to be technical, the Averaging axiom fails when there are infinite utilities), but the utility can be unbounded above and below. It cannot be infinite. In this it differs from Savage’s system.
For Savage’s axioms, unbounded utility implies the existence of gambles like St. Petersburg, of infinite utility, and all the rest of the menagerie of infinite games listed in this SEP article. From these a contradiction with Savage’s axioms can be derived. Hence all models of Savage’s axioms have bounded utility.
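The construction is worth spelling out (a standard sketch; the particular payoff schedule is my choice): if utility is unbounded above, there are consequences c_1, c_2, … with u(c_n) ≥ 2^n, and the act that yields c_n with probability 2^-n then has divergent expected utility:

$$\mathbb{E}[u] \;=\; \sum_{n=1}^{\infty} 2^{-n}\,u(c_n) \;\ge\; \sum_{n=1}^{\infty} 2^{-n}\cdot 2^{n} \;=\; \sum_{n=1}^{\infty} 1 \;=\; \infty.$$

Comparing such acts with one another (or with constant acts) is, roughly, where the contradiction comes from.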
In the Jeffrey-Bolker system, gambles cannot be constructed at will. The set of available gambles is built into the world that the agent faces. The agent is an observer: it cannot act upon the world, only have preferences about how the world is. None of the paradoxical games exist in a model of the Jeffrey-Bolker axioms. They do allow the existence of non-paradoxical infinite games, games such as Convergent St. Petersburg, which is St. Petersburg modified to have arithmetically instead of geometrically growing payouts. However, I note that one of Jeffrey’s verbal arguments against St. Petersburg — that no-one can offer the game because it requires them to be able to cover arbitrarily large payouts — applies equally to Convergent St. Petersburg.
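For concreteness (the exact schedule is my own choice; any arithmetically growing payouts behave the same way): if round n of Convergent St. Petersburg pays n utiles with probability 2^-n, the expectation converges:

$$\mathbb{E} \;=\; \sum_{n=1}^{\infty} \frac{n}{2^{n}} \;=\; 2.$$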
> are you sure Savage’s axioms rule out the sorts of preferences I’m talking about? They don’t rule out bounded utility functions, after all.
Savage’s axioms imply that utility is bounded. This is what Savage did not know when he formulated them, but Peter Fishburn proved it, and Savage included the result in the second edition of his book. So Savage accidentally brute-forced the pathological games out of existence. All acts, in Savage’s system, have a defined, finite expected value, and the St. Petersburg game and its variants do not exist. God himself cannot offer you these games. The utilities of the successive St. Petersburg payoffs are bounded, and cannot even increase linearly, although intuitively that version should have a well-defined, finite expected value.
In my approach, I proceed more cautiously by only considering “finite” acts at the outset: acts with only finitely many different consequences. Then I introduce acts with infinitely many consequences as limits of these, some of which have finite expected values and some infinite.
> All acts, in Savage’s system, have a defined, finite expected value, and the St. Petersburg game and its variants do not exist.
The first clause does not imply the second. The St. Petersburg game variant in which the payoffs are utility does not exist, but the St. Petersburg game variant in which the payoffs are dollars does exist. (Or does something else in Savage’s framework rule it out?)
The St. Petersburg game variant in which the payoffs are dollars can only exist in Savage’s system if there is a limit on the number of utilons that any amount of dollars could buy. No more utility than that exists. But that game is not paradoxical. It has a finite expected value in utilons, and that is an upper bound on the fee it is worth paying to play the game.
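A minimal sketch of that bound, writing u for the agent’s (bounded) utility function over money and U = sup u:

$$\mathbb{E}[u] \;=\; \sum_{n=1}^{\infty} 2^{-n}\,u(\$2^{n}) \;\le\; U \sum_{n=1}^{\infty} 2^{-n} \;=\; U \;<\; \infty.$$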
In other words, the St. Petersburg game (dollars variant) can exist just fine in Savage’s system, it’s only the utility variant that can’t. Good. What about in your system? Can the dollars variant exist?
If the dollars variant can exist, what happens in your system when someone decides that their utility function is linear in dollars? Does your system (like Savage’s) say they can’t do that, that utility must be bounded in dollars at least?
With my axioms, utility can be unbounded, and the St. Petersburg game is admitted and has infinite utility. I don’t regard this as paradoxical. The game cannot be offered, because the offerer must have infinite resources to be able to cover every possible outcome, and on average loses an infinite amount.
St. Petersburg-like games with finite expected utility also exist, such as one where the successive payouts grow linearly instead of exponentially. These also cannot be offered in practice, for the same reason. But their successive approximations converge to the infinite game, so it is reasonable to include them as limits.
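A quick numerical sketch of that convergence (my own illustration; the schedules, 2^n per round for the classic game and n per round for the linear variant, are the usual ones):

```python
# Truncate each game at N rounds; round n pays out with probability 2^-n.
# The classic (geometric) game's truncated expectation grows without bound,
# while the linear variant's converges to 2.

def truncated_ev(payout, rounds):
    """Expected value of the game cut off after `rounds` coin flips."""
    return sum(payout(n) / 2**n for n in range(1, rounds + 1))

for N in (10, 20, 30):
    classic = truncated_ev(lambda n: 2**n, N)  # equals N exactly
    linear = truncated_ev(lambda n: n, N)      # approaches 2
    print(f"N={N:2d}  classic={classic:5.1f}  linear={linear:.6f}")
```

The successive truncations of the linear game settle down to a limit, which is the sense in which it is reasonable to include the infinite game as a limit; the classic game’s truncations just keep growing.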
Both types are excluded by Savage’s axioms, because those axioms require bounded utility. This exclusion appears to me unnatural, the result of an improper handling of infinities; hence my proposal of a revised set of axioms.
Oops, right, I meant the Pasadena game (i.e. a variant of St. Petersburg where the infinite sum is undefined). Sorry.
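For reference, the Pasadena game as usually specified (Nover and Hájek’s schedule): round n pays (−1)^(n+1) · 2^n/n with probability 2^-n, so the expected-value series is the alternating harmonic series:

$$\sum_{n=1}^{\infty} 2^{-n}\cdot\frac{(-1)^{n+1}\,2^{n}}{n} \;=\; \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n},$$

which converges only conditionally: by Riemann’s rearrangement theorem its terms can be reordered to sum to anything at all, so the game has no order-independent expectation.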
I think maybe our disagreement has to do with what is unnatural. I don’t think it’s unnatural to exclude variants in which the payoffs are specified as utilities, since utility is what we are trying to construct. The agent doesn’t have preferences over such games, after all; they have preferences over games with payoffs specified in some other way (such as dollars) and then we construct their utility function based on their preferences. However, it seemed to me that your version was excluding variants in which the payoffs were specified in dollars—which did seem unnatural. But maybe I’ve been misinterpreting you.
Your argument for why these things cannot be offered in practice seems misplaced, or at least irrelevant. What matters is whether the agent has some nonzero credence that they are being offered such a game. I for one am such an agent, and you would be too if you bought into the “0 is not a probability” thing, or if you bought into Solomonoff induction or something like that. The fact that presumably most agents should have very small credence in such things, which is what you seem to be saying, is irrelevant.
Overall I’m losing interest in this conversation, I’m afraid. I think we are talking past each other; I don’t think you get what I am trying to say, and probably I’m not getting what you are trying to say either. I think I understand (some of) your mathematical points (you have some axioms, they lack certain implications the Savage axioms had, etc.) but don’t understand how you get from them to the philosophical conclusion. (And this is genuine non-understanding, not a polite way of saying I think you are wrong.) If you are still interested, great, that would motivate me to continue, and perhaps to start over but more carefully, but I’m saying this now in case you want to just call it a day. ;)
> What matters is whether the agent has some nonzero credence that they are being offered such a game. I for one am such an agent, and you would be too if you bought into the “0 is not a probability” thing, or if you bought into Solomonoff induction or something like that.
In fact I don’t buy into those things. One has to distinguish probability at the object level from probability at the metalevel. At the metalevel it does not exist, only true and false exist, 0 and 1. So when I propose a set of axioms whereby measures of probability and utility are constructed, the probability exists within that framework. The question of whether the framework is a good one matters, but it cannot be discussed in terms of the probability that it is right. I have set out the construction, which I think improves on Savage’s, but people can study it themselves and agree or not. It rules out the Pasadena game. To ask what the probability is of being faced with the Pasadena game is outside the scope of my axioms, Savage’s, and every set of axioms that imply bounded utility. Everyone excludes the Pasadena game.
No, actually they don’t. I’ve just come across a few more papers dealing with Pasadena, Altadena, and St. Petersburg games, beginning with Terrence Fine’s “Evaluating the Pasadena, Altadena, and St Petersburg Gambles”, and tracing back the references from there. From a brief flick through, all of these papers are attempting what seems to me to be a futile activity: assigning utilities to these pathological games. Always, something has to be given up, and here, what is given up is any systematic way of assigning these games utilities; nevertheless they go ahead and do so, even while noticing the non-uniqueness of the assignments.
So that is the situation. Savage’s axioms, and all systems that begin with a total preference relation on arbitrary games, require utility to be bounded, which excludes not only these games, but also infinite games that converge perfectly well to intuitively natural limits. I start from finite games and then extend to well-behaved limits. Others try to assign utility to pathological games, but fail to do so uniquely.
I’m happy to end the conversation here, because at this point there is probably little for us to say that would not be repetition of what has already been said.
Yeah, it seems like we are talking past each other. Thanks for engaging with me anyway.