This is an interesting theorem which helps illuminate the relationship between unbounded utilities and St Petersburg gambles. I particularly appreciate that you don’t make the explicit assumption that the values of gambles must be representable by real numbers, an assumption which is very common but unhelpful in a setting like this. However, I do worry a bit about the argument structure.
The St Petersburg gamble is a famously paradox-riddled case. That is, it is a very difficult case where it isn’t clear what to say, and many theories seem to produce outlandish results. When this happens, it isn’t so impressive to say that we can rule out an opposing theory because in that paradox-riddled situation it would lead to strange results. It strikes me as similar to saying that a rival theory leads to strange results in variable population-size cases so we can reject it (when actually, all theories do), or that it leads to strange results in infinite population cases (when again, all theories do).
Even if one had a proof that an alternative theory doesn’t lead to strange conclusions in the St Petersburg gamble, I don’t think this would count all that much in its favour, as it seems plausible to me that various rules of decision theory that were developed in the cleaner cases of finite possibility spaces (or well-behaved infinite spaces) need to be tweaked to account for more pathological possibility spaces. For a simple example, I’m sympathetic to the sure thing principle, but it directly implies that the St Petersburg gamble is better than itself, because an unresolved gamble is better than a resolved one, no matter how the latter was resolved. My guess is that this means the sure thing principle needs to have its scope limited to exclude gambles whose value is higher than that of any of their resolutions.
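The self-comparison can be made explicit in a short sketch (notation is mine, not from the original: let SP denote the St Petersburg gamble paying 2^n with probability 2^{-n}):

```latex
% For every possible resolution n, the unresolved gamble beats the
% resulting finite payoff:
SP \succ 2^n \quad \text{for all } n = 1, 2, 3, \dots

% The sure thing principle: if A \succ B conditional on every cell of a
% partition of the possibilities, then A \succ B unconditionally.

% Partitioning by the outcome of a resolution of SP, the conditional
% comparison above holds in every cell, so the principle yields
SP \succ SP,
% which is incoherent -- hence the suggested restriction of its scope.
```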
Thanks — this looks promising.
One thing I noticed is that there is an interesting analogy between your model and a fairly standard model in economics where society consists of a representative agent in each time period (representing something like a generation, but without overlap) each trying to maximise its own utility. They can plan based on the utilities of subsequent generations (e.g. predicting that the next generation will undo this generation’s policies on some topic) but they don’t inherently value those utilities. This is then understood via the perspective of a planner who wants to maximise the (discounted) sum of future utilities, even though each agent in the model is only trying to maximise their own utility.
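The two objectives in that model can be written side by side (notation is assumed for illustration, not taken from your model):

```latex
% Generation t chooses its action a_t to maximise only its own utility,
% taking the (predicted) actions of later generations as given:
\max_{a_t} \; u_t(a_t, a_{t+1}, a_{t+2}, \dots)

% The planner instead evaluates the whole stream by a discounted sum,
% with discount factor 0 < \beta < 1:
\max_{(a_t)_{t \ge 0}} \; \sum_{t=0}^{\infty} \beta^{t}\, u_t(a_t, a_{t+1}, a_{t+2}, \dots)
```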
This framework is rich enough to exhibit various inter-generational policy challenges, such as an intergenerational prisoner’s dilemma (where each generation can defect on or cooperate with the following one), the desire of a generation to tie the hands of future generations, or even the desire to stop future generations tying the hands of those that follow them.
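The intergenerational prisoner’s dilemma can be sketched in a few lines. This is a minimal illustration under assumed payoff numbers (the cost and benefit values are mine, not from the model): cooperating costs the current generation something and benefits only the next one, so each self-interested generation defects, while the planner’s discounted sum favours universal cooperation.

```python
# Assumed payoffs: cooperating costs this generation 1 and gives the next 3.
COST, BENEFIT = 1, 3

def generation_utility(my_action, predecessor_action):
    """Utility of one generation, given its own and its predecessor's choice."""
    u = 0
    if predecessor_action == "cooperate":
        u += BENEFIT  # inherited benefit from the previous generation
    if my_action == "cooperate":
        u -= COST     # cost borne now, benefit accrues only to the successor
    return u

def total_discounted_utility(actions, beta=0.9):
    """The planner's objective: discounted sum of each generation's utility."""
    total, prev = 0.0, "defect"  # the first generation has no predecessor
    for t, action in enumerate(actions):
        total += beta ** t * generation_utility(action, prev)
        prev = action
    return total

# Each generation maximising only its own utility defects (cooperation is a
# pure cost to it), yet the planner prefers the cooperative stream:
print(total_discounted_utility(["defect"] * 5))     # 0.0
print(total_discounted_utility(["cooperate"] * 5))  # positive
```

The same scaffold extends naturally to the hand-tying cases: let a generation’s action constrain the feasible action set of its successors, and compare the streams the planner ranks highest with and without that constraint.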