With my axioms, utility can be unbounded, and the St. Petersburg game is admitted and has infinite utility. I don’t regard this as paradoxical. The game cannot be offered in practice, because the offerer must have infinite resources to cover every possible outcome, and on average loses an infinite amount.
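For concreteness, in the standard formulation a fair coin is tossed until it first comes up heads; if that happens on toss $n$, the payout is $2^n$ dollars. Taking utility as linear in dollars for illustration, the expected value is

$$E \;=\; \sum_{n=1}^{\infty} 2^{-n}\cdot 2^{n} \;=\; \sum_{n=1}^{\infty} 1 \;=\; \infty.$$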
St. Petersburg-like games with finite expected utility also exist, such as one where the successive payouts grow linearly instead of exponentially. These also cannot be offered in practice, for the same reason: covering every possible outcome still requires unbounded resources. But their finite truncations converge to the full infinite game, so it is reasonable to include such games as limits.
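For instance, keeping the same coin-tossing setup but paying $n$ dollars instead of $2^n$ when the first head lands on toss $n$ gives

$$E \;=\; \sum_{n=1}^{\infty} 2^{-n}\cdot n \;=\; 2,$$

a finite expectation, even though the possible payouts remain unbounded (which is why the offerer still needs unlimited resources).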
Both types are excluded by Savage’s axioms, because those axioms imply that utility is bounded. This exclusion appears to me unnatural, the result of an improper handling of infinities; hence my proposal of a revised set of axioms.
Oops, right, I meant the Pasadena game (i.e. a variant of St. Petersburg in which the infinite sum defining the expected value is undefined). Sorry.
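Spelled out (this is the Nover–Hájek formulation of the Pasadena game): if the first head appears on toss $n$, the payout is $(-1)^{n-1}\,2^{n}/n$ dollars, so the expected-value series is

$$E \;=\; \sum_{n=1}^{\infty} 2^{-n}\cdot\frac{(-1)^{n-1}\,2^{n}}{n} \;=\; \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n},$$

the alternating harmonic series. It converges only conditionally, so by the Riemann rearrangement theorem its terms can be reordered to sum to any real number (or to diverge); hence the game has no well-defined expected value.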
I think maybe our disagreement has to do with what is unnatural. I don’t think it’s unnatural to exclude variants in which the payoffs are specified as utilities, since utility is what we are trying to construct. The agent doesn’t have preferences over such games, after all; they have preferences over games with payoffs specified in some other way (such as dollars), and then we construct their utility function from those preferences. However, it seemed to me that your version was excluding variants in which the payoffs were specified in dollars, which did seem unnatural. But maybe I’ve been misinterpreting you.
Your argument for why these things cannot be offered in practice seems misplaced, or at least irrelevant. What matters is whether the agent has some nonzero credence that they are being offered such a game. I for one am such an agent, and you would be too if you bought into the “0 is not a probability” thing, or if you bought into Solomonoff induction or something like that. The fact that most agents should presumably have very small credence in such things, which is what you seem to be saying, is irrelevant.
Overall I’m losing interest in this conversation, I’m afraid. I think we are talking past each other; I don’t think you get what I am trying to say, and probably I’m not getting what you are trying to say either. I think I understand (some of) your mathematical points (you have some axioms, they lack certain implications the Savage axioms had, etc.) but don’t understand how you get from them to the philosophical conclusion. (And this is genuine non-understanding, not a polite way of saying I think you are wrong.) If you are still interested, great, that would motivate me to continue, and perhaps to start over but more carefully, but I’m saying this now in case you want to just call it a day. ;)
What matters is whether the agent has some nonzero credence that they are being offered such a game. I for one am such an agent, and you would be too if you bought into the “0 is not a probability” thing, or if you bought into Solomonoff induction or something like that.
In fact I don’t buy into those things. One has to distinguish probability at the object level from probability at the metalevel. At the metalevel it does not exist; only true and false exist, 0 and 1. So when I propose a set of axioms whereby measures of probability and utility are constructed, the probability exists within that framework. The question of whether the framework is a good one matters, but it cannot be discussed in terms of the probability that it is right. I have set out the construction, which I think improves on Savage’s, but people can study it themselves and agree or not. It rules out the Pasadena game. To ask what the probability is of being faced with the Pasadena game is outside the scope of my axioms, Savage’s, and every set of axioms that implies bounded utility. Everyone excludes the Pasadena game.
No, actually they don’t. I’ve just come across a few more papers dealing with the Pasadena, Altadena, and St. Petersburg games, beginning with Terrence Fine’s “Evaluating the Pasadena, Altadena, and St Petersburg Gambles”, and tracing back the references from there. From a brief flick through, all of these papers attempt what seems to me a futile activity: assigning utilities to these pathological games. Always, something has to be given up, and here what is given up is any systematic way of assigning utilities to these games; nevertheless the authors go ahead and do so, even while noting the non-uniqueness of the assignments.
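As a small numerical illustration of why no unique assignment is available (my own sketch, not taken from those papers): the terms of the Pasadena expectation series can be summed in different orders to different limits.

```python
# Terms of the Pasadena game's expected-value series: the n-th term is
# probability 2^-n times payout (-1)^(n-1) * 2^n / n, which simplifies
# to (-1)^(n-1) / n -- the alternating harmonic series.
def term(n):
    return (-1) ** (n - 1) / n

# Natural order: partial sums approach ln 2 (~0.6931).
natural = sum(term(n) for n in range(1, 200001))

# Rearranged order (two positive terms, then one negative): the very
# same terms now approach (3/2) ln 2 (~1.0397).
pos = [1.0 / n for n in range(1, 400001, 2)]   # +1/1, +1/3, +1/5, ...
neg = [-1.0 / n for n in range(2, 200001, 2)]  # -1/2, -1/4, -1/6, ...
rearranged = 0.0
for j in range(len(neg)):
    rearranged += pos[2 * j] + pos[2 * j + 1] + neg[j]

print(natural, rearranged)  # ~0.6931 vs ~1.0397
```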
So that is the situation. Savage’s axioms, and all systems that begin with a total preference relation on arbitrary games, require utility to be bounded, which excludes not only these pathological games but also infinite games that converge perfectly well to intuitively natural limits. I start from finite games and then extend to well-behaved limits. Others try to assign utilities to the pathological games, but fail to do so uniquely.
I’m happy to end the conversation here, because at this point there is probably little for us to say that would not be repetition of what has already been said.
Yeah, it seems like we are talking past each other. Thanks for engaging with me anyway.