Viewing income disparity as a problem that overrides expected wealth among your possible selves is a very interesting angle.
Does this mean that there are voting schemes that are structurally impossible to gerrymander? Do they inevitably fail other voting desiderata?
Wouldn’t it also make sense to treat the outside view as something to be updated: to treat yourself as beating the market if you are in fact beating the market? Or is it that “unknown unknowns” and “I know that I don’t know” kinds of factors never shift? I read the recommendation as: when you are wrong, be less agentic and do the null behaviour (a kind of action-level analogue of the null hypothesis). The angle I used to apply is that if you are wrong you should update to be more right. But this recommendation works even if you don’t know how to improve: halt and do what you were previously doing instead of totally freezing.
So am I correct that taking Kelly betting seriously leads to the recommendation that the St. Petersburg bet should be rejected? I am also thinking of a continuous version of the setup where at each timestep you can stake any amount of money you want on double or nothing. If double is a tiny bit more probable than nothing, you stake only very little money, and at exactly even odds you stake exactly 0 money. Is this not a solution to the St. Petersburg blowup?
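A quick way to see the vanishing stake: for a double-or-nothing bet won with probability p, the textbook Kelly fraction is f* = 2p − 1 (this formula is my gloss, not from the comment above), which shrinks to zero as p approaches 1/2:

```python
def kelly_double_or_nothing(p: float) -> float:
    """Kelly fraction for a double-or-nothing bet won with probability p.

    General Kelly: f* = p - q/b with net odds b; here b = 1, so f* = 2p - 1.
    A negative result means the bet is unfavourable: stake nothing.
    """
    return max(0.0, 2.0 * p - 1.0)

print(kelly_double_or_nothing(0.501))  # barely favourable: stake 0.2% of bankroll
print(kelly_double_or_nothing(0.5))    # exactly even odds: stake 0
```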
It seems there are recommendations that violate maximising expected value; for clarity, for myself and others, I will restate more explicitly. You have 100 money and are considering two bets. Bet A pays 2.1 (2 + 0.1) times the stake with probability 2/3 and nothing with probability 1/3; its bet-taking degree is 66.66… Bet B pays 4.2 (2 × 2.1) times the stake with probability 2/6 and nothing with probability 4/6; its bet-taking degree is 33.33… The expectation of both is 1.4, but the bets don’t get treated the same: we are not ambivalent between them. We prefer A, and can do so without providing a risk tolerance profile. This is probably mostly additional structure on top; most comparisons that go the other way are overriding ambivalences to favour one side. The same expected values point in the same direction but not with the same magnitude.
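Under one concrete reading of this, with a log-growth (Kelly) criterion: the standard Kelly fractions come out different from the probabilities quoted above, but the preference for A still falls out with no extra risk-tolerance parameter. A sketch, assuming the textbook formula f* = p − q/b with net odds b:

```python
import math

def kelly_fraction(p: float, payout_mult: float) -> float:
    """Kelly fraction for a bet paying payout_mult * stake with prob p, else 0.

    Net odds b = payout_mult - 1; f* = p - (1 - p) / b.
    """
    b = payout_mult - 1.0
    return max(0.0, p - (1.0 - p) / b)

def log_growth(p: float, payout_mult: float, f: float) -> float:
    """Expected log wealth growth per round when staking fraction f."""
    b = payout_mult - 1.0
    return p * math.log(1.0 + f * b) + (1.0 - p) * math.log(1.0 - f)

f_a = kelly_fraction(2/3, 2.1)  # Bet A: 2/3 chance of 2.1x the stake
f_b = kelly_fraction(2/6, 4.2)  # Bet B: 2/6 chance of 4.2x the stake
print(f_a, log_growth(2/3, 2.1, f_a))
print(f_b, log_growth(2/6, 4.2, f_b))
```

Both bets are taken, but A at a larger fraction and with a strictly higher optimal growth rate, so A is preferred without any appeal to a separate risk profile.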
It is interesting to consider whether there are exceptions where this new scheme would recommend contrary to pure EV. It would seem that less volatile scenarios respond faster to differences in outcome intensity. With expected value 1.4 we had two bets with degrees 66.66 and 33.33. Are there any bets with bet-taking degrees between those two that have a lesser expected value?
I suspect that, offered each bet alone, we would engage to those 66.66 and 33.33 degrees, but offered both together we would not put in the whole 100 (66.66 + 33.33).
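One way to check this intuition is to maximise expected log growth over both bets jointly (a sketch; I use the standard Kelly objective, so the solo fractions differ from the degrees quoted above, but the point survives: the jointly optimal total stake stays well below the whole bankroll):

```python
import math

# Bet A: 2.1x payout with prob 2/3; Bet B: 4.2x payout with prob 2/6.
# Assumed independent; net odds are payout minus the stake.
P_A, NET_A = 2/3, 1.1
P_B, NET_B = 2/6, 3.2

def joint_log_growth(f_a: float, f_b: float) -> float:
    """Expected log growth when staking fractions f_a on A and f_b on B at once."""
    total = 0.0
    for won_a in (True, False):
        for won_b in (True, False):
            p = (P_A if won_a else 1 - P_A) * (P_B if won_b else 1 - P_B)
            wealth = 1.0 + (NET_A * f_a if won_a else -f_a) \
                         + (NET_B * f_b if won_b else -f_b)
            if wealth <= 0:
                return float("-inf")  # risk of ruin: never optimal under log utility
            total += p * math.log(wealth)
    return total

# Coarse grid search for the jointly optimal fractions.
best = max(((fa / 100, fb / 100) for fa in range(100) for fb in range(100)),
           key=lambda fs: joint_log_growth(*fs))
print(best, sum(best))  # total stake stays well below the full bankroll
```

The log of the lose-both outcome blows up as the total stake approaches the whole bankroll, which is exactly the force that keeps the combined engagement below 100.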
If some extreme scenario happens at a probability p, then even if its utility shoots through the roof, or has no roof at all, the maximum fraction that scenario can command is p, and it can’t go over that. You are not allowed to bet 1000 out of 100, and you can’t recommend harder than “100% yes”.
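This cap is visible in the Kelly formula itself (again my gloss): f* = p − (1 − p)/b approaches, but never reaches, the win probability p as the payout odds b grow without bound.

```python
def kelly_fraction(p: float, b: float) -> float:
    """Kelly fraction for win probability p at net odds b (lose the stake otherwise)."""
    return max(0.0, p - (1.0 - p) / b)

p = 0.01  # a 1%-probability jackpot scenario
for b in (10, 1_000, 1_000_000, 1e12):
    print(b, kelly_fraction(p, b))  # climbs toward 0.01 but never exceeds it
```

So even an unboundedly huge payout can command at most the p fraction of the bankroll that matches its probability.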