All acts, in Savage’s system, have a defined, finite expected value, and the St. Petersburg game and its variants do not exist.
The first clause does not imply the second. The St. Petersburg game variant in which the payoffs are utility does not exist, but the St. Petersburg game variant in which the payoffs are dollars does exist. (Or does something else in Savage’s framework rule it out?)
The St. Petersburg game variant in which the payoffs are dollars can only exist in Savage’s system if there is a limit on the number of utilons that any amount of dollars could buy. No more utility than that exists. But that game is not paradoxical. It has a finite expected value in utilons, and that is an upper bound on the fee it is worth paying to play the game.
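To make that concrete (assuming the standard payoff schedule: the game pays 2^n dollars when the first head arrives on toss n, which happens with probability 2^{-n}), if utility is bounded, say u(x) \le M for every dollar amount x, then

E[u] \;=\; \sum_{n=1}^{\infty} 2^{-n}\, u(2^n) \;\le\; M \sum_{n=1}^{\infty} 2^{-n} \;=\; M,

so the game's expected utility is finite, and no entry fee costing more than that many utilons is worth paying.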
In other words, the St. Petersburg game (dollars variant) can exist just fine in Savage’s system; it’s only the utility variant that can’t. Good. What about in your system? Can the dollars variant exist?
If the dollars variant can exist, what happens in your system when someone decides that their utility function is linear in dollars? Does your system (like Savage’s) say they can’t do that, i.e. that utility must be bounded, at least in dollars?
With my axioms, utility can be unbounded, and the St. Petersburg game is admitted and has infinite utility. I don’t regard this as paradoxical. The game cannot be offered, because the offerer must have infinite resources to be able to cover every possible outcome, and on average loses an infinite amount.
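Concretely (again taking the standard schedule of 2^n dollars with probability 2^{-n}, and a utility function linear in dollars), the expected utility is the divergent sum

E \;=\; \sum_{n=1}^{\infty} 2^{-n} \cdot 2^n \;=\; \sum_{n=1}^{\infty} 1 \;=\; \infty,

and the same sum, read from the offerer's side, is the expected payout they are committed to, which is why no one with finite resources can actually offer the game.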
St. Petersburg-like games with finite expected utility also exist, such as one where the successive payouts grow linearly instead of exponentially. These also cannot be offered in practice, for the same reason. But their finite truncations converge to the infinite game, so it is reasonable to include such games as limits.
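For a concrete instance of the linear-growth variant (same coin-tossing setup, but paying n utilons when the first head arrives on toss n):

E \;=\; \sum_{n=1}^{\infty} 2^{-n}\, n \;=\; 2,

and the truncated games that stop paying after toss N have values \sum_{n=1}^{N} n\,2^{-n}, which converge to 2 as N \to \infty, so the infinite game really is the limit of its finite approximations.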
Both types are excluded by Savage’s axioms, because those axioms require bounded utility. This exclusion appears to me unnatural, the result of an improper handling of infinities; hence my proposal of a revised set of axioms.
Oops, right, I meant the Pasadena game (i.e. a variant of St. Petersburg in which the infinite sum defining the expected value is undefined). Sorry.
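(For reference, in the usual formulation, which I take to be Nover and Hájek's: toss a fair coin until it lands heads; if that happens on toss n, the payoff is (-1)^{n-1} 2^n / n dollars, a gain for odd n and a loss for even n. The expectation is then the alternating harmonic series,

E \;=\; \sum_{n=1}^{\infty} 2^{-n}\,(-1)^{n-1}\,\frac{2^n}{n} \;=\; \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n},

which converges only conditionally, so rearranging its terms can make it sum to any value at all, or diverge; there is no privileged value to call the game's expected payoff.)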
I think maybe our disagreement has to do with what is unnatural. I don’t think it’s unnatural to exclude variants in which the payoffs are specified as utilities, since utility is what we are trying to construct. The agent doesn’t have preferences over such games, after all; they have preferences over games with payoffs specified in some other way (such as dollars) and then we construct their utility function based on their preferences. However, it seemed to me that your version was excluding variants in which the payoffs were specified in dollars—which did seem unnatural. But maybe I’ve been misinterpreting you.
Your argument for why these things cannot be offered in practice seems misplaced, or at least irrelevant. What matters is whether the agent has some nonzero credence that they are being offered such a game. I for one am such an agent, and you would be too if you bought into the “0 is not a probability” thing, or if you bought into Solomonoff induction or something like that. The fact that presumably most agents should have very small credence in such things, which is what you seem to be saying, is irrelevant.
Overall I’m losing interest in this conversation, I’m afraid. I think we are talking past each other; I don’t think you get what I am trying to say, and probably I’m not getting what you are trying to say either. I think I understand (some of) your mathematical points (you have some axioms, they lack certain implications the Savage axioms had, etc.) but don’t understand how you get from them to the philosophical conclusion. (And this is genuine non-understanding, not a polite way of saying I think you are wrong.) If you are still interested, great, that would motivate me to continue, and perhaps to start over but more carefully, but I’m saying this now in case you want to just call it a day. ;)
What matters is whether the agent has some nonzero credence that they are being offered such a game. I for one am such an agent, and you would be too if you bought into the “0 is not a probability” thing, or if you bought into Solomonoff induction or something like that.
In fact I don’t buy into those things. One has to distinguish probability at the object level from probability at the metalevel. At the metalevel it does not exist; only true and false exist, 0 and 1. So when I propose a set of axioms whereby measures of probability and utility are constructed, the probability exists within that framework. The question of whether the framework is a good one matters, but it cannot be discussed in terms of the probability that it is right. I have set out the construction, which I think improves on Savage’s, but people can study it themselves and agree or not. It rules out the Pasadena game. To ask what the probability is of being faced with the Pasadena game is outside the scope of my axioms, Savage’s, and every set of axioms that imply bounded utility. Everyone excludes the Pasadena game.
No, actually they don’t. I’ve just come across a few more papers dealing with Pasadena, Altadena, and St. Petersburg games, beginning with Terrence Fine’s “Evaluating the Pasadena, Altadena, and St Petersburg Gambles”, and tracing back the references from there. From a brief flick through, all of these papers are attempting what seems to me to be a futile activity: assigning utilities to these pathological games. Always, something has to be given up, and here, what is given up is any systematic way of assigning these games utilities; nevertheless they go ahead and do so, even while noticing the non-uniqueness of the assignments.
So that is the situation. Savage’s axioms, and all systems that begin with a total preference relation on arbitrary games, require utility to be bounded in order to exclude not only these games, but also infinite games that converge perfectly well to intuitively natural limits. I start from finite games and then extend to well-behaved limits. Others try to assign utility to pathological games, but fail to do so uniquely.
I’m happy to end the conversation here, because at this point there is probably little for us to say that would not be repetition of what has already been said.
Yeah, it seems like we are talking past each other. Thanks for engaging with me anyway.