I made an error of omission in just saying "sampling from noisy posteriors"; note I didn't say they were performing unbiased sampling.
To extend the psychology example: a study can be considered a sampling technique over the noisy posterior. You appear to be arguing that the extent to which this is a biased sample is a "skill issue."
I'm arguing that it is often very difficult to perform unbiased sampling in some fields; the issue might be a property of the posterior, not a sign that the researcher has a weak prefrontal cortex. In this framing it would make total sense if two researchers studying the same (or correlated) posteriors are biased in the same direction: it's the same posterior!
I think you’ve made a motte-and-bailey argument:
Motte: The payoff structure of the cosmic flip/St. Petersburg Paradox applied to the real world is actually much better than double-or-nothing, and therefore you should play the game.
Bailey: SBF was correct in saying you should play the double-or-nothing St. Petersburg Paradox game.
Your motte is definitely defensible. Obviously, you can alter the payoff structure of the game to a point where you should play it.
That does not mean "there's no real paradox"; it just means you are no longer talking about the paradox. SBF literally said he would take the game in the specific case where the game was double-or-nothing. Totally different!
This ends my issue with your argument, but I'll also share my favorite anti-St. Petersburg Paradox argument, since you didn't really touch on any of the issues it connects to. In short: defining expected value as the mean outcome is inappropriate in this scenario; we should use the median outcome instead.
This paper makes the argument better than I can if you’re curious, but here’s my concise summary:
The mean is perhaps appropriate if we play the game many (or infinitely many) times. In those situations, by the law of large numbers, the average of the outcomes will approach the mean interpretation of expected value.
For a single play-through (as in the thought experiment), the mean is not appropriate, because the law of large numbers does not apply. Instead, we should value the game by its median outcome: the outcome one should reasonably expect.
Indeed, if you have people actually play this game, their betting behavior is more consistent with an intuition of median expected value (this is tested in the paper).
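A quick simulation makes the mean/median gap concrete. This is just a sketch of the standard double-or-nothing setup (the `st_petersburg_play` helper and the $2 starting pot are my assumptions, not anything from the paper):

```python
import random
import statistics

def st_petersburg_play(rng):
    """One double-or-nothing play: the pot starts at $2 and doubles on
    each heads; the game ends (and pays out the pot) on the first tails."""
    pot = 2
    while rng.random() < 0.5:  # heads
        pot *= 2
    return pot

rng = random.Random(42)
payoffs = [st_petersburg_play(rng) for _ in range(100_000)]

# The median is small and stable; the sample mean is much larger and
# keeps drifting upward as the sample grows (the true mean is infinite).
print("median:", statistics.median(payoffs))
print("mean:  ", statistics.mean(payoffs))
```

Half of all plays pay out the minimum, so the median hugs the bottom of the distribution, while the mean is dragged around by the rare astronomical payoffs.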
There's an argument that median EV is the better interpretation even when playing multiple times: you can think of n plays as playing the compound game "play the game n times" once, and take the median of the total payoff. This resolves the paradox in all but the infinite case.
If you use the median interpretation of EV for finite trials of the game, there is no paradox.
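One way to see this numerically (again a sketch; the helper names are mine): estimate the median total payoff of a block of n plays and divide by n to get a per-play value. It is finite for every finite n and grows only slowly with n, so a finite fair price exists for any finite number of plays:

```python
import random
import statistics

def st_petersburg_play(rng):
    """One double-or-nothing play: $2 pot doubles per heads, pays out on first tails."""
    pot = 2
    while rng.random() < 0.5:
        pot *= 2
    return pot

def median_value_per_play(n_plays, n_trials, rng):
    """Median total payoff of a block of n_plays games, normalized per play."""
    totals = [sum(st_petersburg_play(rng) for _ in range(n_plays))
              for _ in range(n_trials)]
    return statistics.median(totals) / n_plays

rng = random.Random(0)
for n in (1, 10, 100, 1000):
    # grows roughly like log2(n): always finite for finite n
    print(n, median_value_per_play(n, 1000, rng))
```

Only in the limit of infinitely many plays does the per-play value diverge, which matches the claim that the paradox survives only in the infinite case.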
A personal gripe: I find it more than a little stupid that the "expected value" is a value you don't actually "expect" to observe very often when sampling from highly skewed distributions.
Mathematicians and economists have taken issue with the mean definition of EV for basically as long as it has existed. Regardless of whether you agree with them, it seems pretty obvious to me that the mean is the wrong tool for valuing single-trial outcomes.
So maybe in the real world we should play the game, but I firmly believe we should value the game using medians and not means. Do we get to play the world outcome optimization game multiple/infinite times? Obviously not.