My point about iterations is not about getting better estimates for the probabilities. The probabilities are known, defined quantities in the argument. The difference is that in some settings one will have the luxury of iterating, and so be able to actually average the results towards that expected value. If you can iterate an infinite number of times, your results converge to that expected value.
It seems one implication of your position is that people should be indifferent between the following two settings, where the expected payoff is the same:
1) They toss a fair coin as many times as they want. If they get heads, they will receive $60; if they get tails, they pay $50.
2) They can have the same coin and same payoffs, but only get one toss.
Do you think most people's decisions will be the same? If not, how do you explain the difference?
Maybe this is a matter of different definitions/connotations of “gamble”. Given that the odds are in the casino’s favour, and that they can repeat/iterate the games a huge number of times, the results do indeed tend to converge to the expected value, which is in the casino’s favour—I’m in total agreement there. The odds that they’d lose out, given those facts, are infinitesimal and negligible for pretty much all practical purposes. But it’s like they asymptotically approach zero, not that they literally are zero.
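A quick Monte Carlo sketch can illustrate this point. (The numbers here are my own illustrative assumptions, not anything from the discussion: I assume an even-money game the house wins 55% of the time, just to keep the arithmetic simple.) The estimated probability that the house ends a run behind shrinks towards zero as the number of plays grows, but never literally reaches it for any finite run:

```python
import random

random.seed(1)  # reproducible runs

def house_net(plays, p_win=0.55):
    """House's net result over a run of even-money bets it wins with probability p_win."""
    return sum(1 if random.random() < p_win else -1 for _ in range(plays))

def prob_house_behind(plays, trials=2_000):
    """Monte Carlo estimate of the probability the house ends a run with a loss."""
    return sum(house_net(plays) < 0 for _ in range(trials)) / trials

for plays in (10, 100, 2_000):
    print(plays, prob_house_behind(plays))
```

With these made-up numbers, the estimated loss probability falls from around a quarter at 10 plays to essentially zero at 2,000 plays: small, but asymptotically approaching zero rather than ever equalling it.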
It seems very similar to the case of entropy:
The Second Law of Thermodynamics is statistical in nature, and therefore its reliability arises from the huge number of particles present in macroscopic systems. It is not impossible, in principle, for all 6 × 10²³ atoms in a mole of a gas to spontaneously migrate to one half of a container; it is only fantastically unlikely—so unlikely that no macroscopic violation of the Second Law has ever been observed.
But in any case, it seems your key point there, which I actually agree with, is that the deal is better for the casino (partly) because they get to play the odds more often than an individual gambler does, so the value they actually get is more likely to be close to the expected value than the individual gambler’s is. But I think the reason this is better is because of the diminishing marginal utility of money—losing all your money is way worse than doubling it is good—and not because of the risk-uncertainty distinction itself.
(Though there could be relevant interplays between the magnitude of one’s uncertainty and the odds one ends up in a really bad position, which might make one more inclined to avoid “bets” of any kind when the uncertainty is greater. But again, it’s helpful to consider whether you’re thinking about expected utility or expected value in some other unit, and it also seems unnecessary to use a categorical risk-uncertainty distinction.)
It seems one implication of your position is that people should be indifferent between the following two settings, where the expected payoff is the same:
1) They toss a fair coin as many times as they want. If they get heads, they will receive $60; if they get tails, they pay $50.
2) They can have the same coin and same payoffs, but only get one toss.
Do you think most people's decisions will be the same? If not, how do you explain the difference?
Regarding whether I think people’s decisions will be the same, I think it’s useful to make clear the distinction between descriptive and normative claims. As I say in footnote 1:
Additionally, it’s sometimes unclear whether proponents of the distinction are merely arguing (a) that people perceive such a distinction, so it’s useful to think about and research it in order to understand how people are likely to think and behave, or are actually arguing (b) that people should perceive such a distinction, or that such a distinction “really exists”, “out there in the world”. It seems to me that (a) is pretty likely to be true, but wouldn’t have major consequences for how we rationally should make decisions when not certain. Thus, in this post I focus exclusively on (b).
So my position doesn’t really directly imply anything about what people will decide. It’s totally possible for the risk-uncertainty distinction to not “actually make sense” and yet still be something that economists, psychologists, etc. should be aware of as something people believe in or act as if they believe in. (Like how it’s useful to study biases or folk biology or whatever, to predict behaviours, without having to imagine that the biases or folk biology actually reflect reality perfectly.) But I’d argue that such researchers should make it clear when they’re discussing what people do vs when they’re discussing what they should do, or what’s rational, or whatever.
(If your claims are mostly about how people actually think, rather than normative claims, then we may be more in agreement than it appears.)
But as for what people should do in that situation, I don’t think my position implies they should be indifferent, because getting diminishing marginal utility from money is entirely compatible with my position.
In the extreme version of that situation, if someone starts with $150 as their entire set of assets, and takes bet 2, then there’s a 50% chance they’ll lose a third of everything they own. That’s really bad for them. The 50% chance they win $60 could plausibly not make up for that.
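A toy calculation can make that concrete. (This is my own sketch: I assume a logarithmic utility function, which is one standard way to model diminishing marginal utility, not something claimed in the discussion.) Starting from $150, the single toss has a positive expected value in dollars but a negative expected change in utility:

```python
from math import log

wealth = 150            # entire assets, as in the example above
heads, tails = 60, -50  # payoffs of the coin toss

# Expected value in dollars: positive
ev_dollars = 0.5 * heads + 0.5 * tails   # 5.0

# Expected utility under log utility (an assumption of this sketch,
# chosen to model diminishing marginal utility of money)
eu_bet = 0.5 * log(wealth + heads) + 0.5 * log(wealth + tails)
eu_no_bet = log(wealth)

print(ev_dollars)           # positive expected dollar value
print(eu_bet - eu_no_bet)   # negative: the single toss lowers expected utility
```

So under this (assumed) utility function, declining the single toss is the rational choice despite its positive expected dollar value.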
If the same person takes bet 1, the odds that they end up worse off go down, because, as you say, the actual results will tend to converge towards the (positive in dollar terms) expected value as one gets more trials/repetitions.
So it seems to me that it’s reasonable to see bet 1 as better than bet 2 (depending on an individual’s utility function for money and how much money they currently have), but that this doesn’t require imagining a categorical risk-uncertainty distinction.
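That convergence claim can be checked exactly for the coin in this example (a sketch of my own, using the $60/$50 payoffs from the discussion): the probability of ending below your starting wealth falls as the number of tosses grows.

```python
from math import comb

def prob_worse_off(tosses):
    """Exact probability of ending below your starting wealth after
    `tosses` fair-coin bets paying +$60 on heads and -$50 on tails."""
    # With k heads, the net result is 60*k - 50*(tosses - k),
    # which is negative exactly when 110*k < 50*tosses.
    behind = sum(comb(tosses, k) for k in range(tosses + 1)
                 if 110 * k < 50 * tosses)
    # Python's integer true division handles the large integers exactly
    return behind / 2**tosses

for tosses in (1, 11, 110, 1100):
    print(tosses, prob_worse_off(tosses))
```

With one toss the chance of being worse off is 50%; with many tosses it dwindles towards zero, which is why bet 1 can reasonably be preferred to bet 2 without invoking a categorical risk-uncertainty distinction.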
Michael, do some research on the way casinos work. The casino owners don’t gamble on their income. Here is a link to consider: https://www.quora.com/How-do-casinos-ultimately-make-money