(As per usual, my comments are intended not to convince but to outline my thinking, and potentially have holes poked in it. I wouldn’t be willing to spend time writing as many paragraphs as I do if I thought there was 0 chance I’d end up learning something new as a result!)
I don’t think either the gambling analogy or the market analogy really shows that the risk-uncertainty distinction makes sense in categorical terms, or that it’s useful. I think they actually show a large collection of small, different issues, which means my explanation of why I think that may be a bit messy.
The house sets up the rules so they have something like a 3% edge on all the customers. They have no idea whether any given bet will pay off for them, but over all the bets over the whole year, they can be pretty sure that they keep 3% of all money put down during the year.
I think this is true, but that “something like” and “pretty sure” are doing a lot of the work here. The house can’t be absolutely certain that there’s a 3% edge, for a whole range of reasons—e.g., there could be card-counters at some point, the house’s staff may go against their instructions in order to favour their friends or some pretty women, the house may have somehow simply calculated this wrong, or something more outlandish like Eliezer’s dark lords of the Matrix. As with my points in the post, in practice I’d be happy making my bets as if these issues weren’t issues, but they still prevent absolute certainty.
I don’t think you were explicitly trying to say that the house does have absolute certainty (especially given the following paragraph), so that’s sort of me attacking a straw man. But I think the typical idea of the distinction being categorical has to be premised on absolute certainty, and I think you may still sort-of be leaning on that idea in some ways, so it seems worth addressing that idea first.
But let’s now think about the case of a gambler. What if he can get the same edge as the house was getting? Is he really in the same situation of risk management as the casino? I think that depends. We might know what the probabilities are and the shape of the function but we don’t really know how many times we need to play before our sampling starts to reflect that distribution—statistics gives us some ideas but that also has a random element to it. The gambler has to decide whether he has the budget to make it through far enough to take advantage of the underlying probabilities—that is, to take advantage of “managing the risk”.
If the gambler cannot figure that out, or knows for a fact there are insufficient funds, do those probabilities really provide useful information on what to expect? To me this is then uncertainty. The gambler simply doesn’t get the opportunity to repeat, and so doesn’t get the expected return.
I think what’s really going on here in your explicit comment is:
1) Differences in the size of the confidence intervals; the house can indeed be more confident about their understanding of the odds. But it’s not an absolute difference; they can’t be sure, and the gambler can know something.
So I think it’s sort-of true to say “We might know what the probabilities are and the shape of the function but we don’t really know how many times we need to play before our sampling starts to reflect that distribution—statistics gives us some ideas but that also has a random element to it.” But here the “don’t really know”, “some ideas”, and “a random element” seem to me to be doing a lot of the work—this isn’t absolutely different from the house’s situation; in both cases, there can be a prior, there’s some data, and there’s some randomness and uncertainty. The house has a way better grounded prior, way more data, and way less uncertainty, but it’s not a categorical difference, as far as I can see.
2) An extreme case of diminishing returns to money. Becoming totally broke is really bad, and once you get there you can’t get back. So even if he does have really good reason to believe gambling has positive expected value in dollars, that doesn’t mean it has positive expected utility. I think it’s very common to conflate the two, and that this is what underlies a lot of faulty ideas (i.e., that we should be genuinely risk-averse, in terms of utility—it makes a lot of sense to avoid the colloquial sense of risk, and it makes sense to avoid many gambles with positive expected dollar value, but that all makes sense even if we’re risk-neutral in terms of utility).
So it’s very easy to reach the reasonable-seeming conclusion that gambling is unwise even if there’s positive expected value in dollar terms, without leaning on the idea of a risk-uncertainty distinction (to be honest, even in terms of degrees—we don’t even need to talk about the sizes of the confidence intervals, in this case).
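(To make the dollars-versus-utility point concrete, here is a minimal sketch of my own; the $150 / +$60 / -$50 numbers and the logarithmic utility function are illustrative assumptions, not anything from your comments.)

```python
import math

def ev_dollars_and_utility(wealth, win, lose, p_win=0.5):
    """Compare the expected change in dollars with the expected change in
    log-utility of wealth for a single bet."""
    ev_dollars = p_win * win - (1 - p_win) * lose
    delta_log_utility = (p_win * math.log(wealth + win)
                         + (1 - p_win) * math.log(wealth - lose)
                         - math.log(wealth))
    return ev_dollars, delta_log_utility

# Someone whose entire wealth is $150, offered +$60 / -$50 on a fair coin:
print(ev_dollars_and_utility(wealth=150, win=60, lose=50))
# Expected dollar value is +$5, but the expected change in log-utility is negative,
# so the same bet can be "good in dollars" yet bad for this particular person.
```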
I also think there are perhaps two more things implicitly going on in that analogy, which aren’t key points but might slightly nudge one’s intuitions:
1) We have a great deal of data, and very strong theoretical reasons, pointing towards the idea that, in reality, gambling has negative expected value in dollar terms. This could mean that, if Framing A seems to suggest one shouldn’t gamble, and Framing B suggests one should, Framing A scores points with our intuitions. And this could occur even if we stipulate that there’s positive expected value, because system 1 may not get that memo. (But this is a very small point, and I do think it’s acceptable to use analogies that break our intuitions a bit, I just think it should be acknowledged.)
2) We also probably have a strong prior that gamblers are very often overconfident. It seems likely to me that, if you have a bias towards overconfidence, then the less grounding you have for your probabilities, the more likely they are to be wrong. That is, it’s not just that the value you happen to receive could be further from what you expect, in either direction (compared to if you had a better-grounded probability), but that your perceived expected value is probably off the reasonable expected value by more, because your beliefs had more “room to manoeuvre” and were biased to manoeuvre in one direction in particular. So the less trustworthy our probability, the more likely it is we shouldn’t gamble, as long as we’re biased towards overconfidence, but we can discuss this in terms of degrees rather than a categorical distinction.
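(A small simulation sketch of that second point, with entirely made-up numbers: if belief errors have an optimistic bias, then the noisier, i.e. less well grounded, the belief, the further the perceived win probability tends to sit from the true one on average.)

```python
import random

def mean_gap(true_p=0.48, bias=0.03, noise_sd=0.05, n=100_000):
    """Average absolute gap between a gambler's perceived win probability and the
    true one, modelling the belief as true_p + optimistic bias + zero-mean noise."""
    total = 0.0
    for _ in range(n):
        perceived = min(1.0, max(0.0, random.gauss(true_p + bias, noise_sd)))
        total += abs(perceived - true_p)
    return total / n

print(mean_gap(noise_sd=0.02))  # well-grounded belief: average gap of roughly 0.03
print(mean_gap(noise_sd=0.15))  # poorly-grounded belief: average gap several times larger
```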
In this type of situation, rather than trying to calculate all the odds, wagers and pay-offs, maybe a simple rule is better if someone wants to gamble.
I think that’s likely true, but I think that’s largely because of a mixture of the difficulty of computing the odds for humans (it’s just time consuming and we’re likely to make mistakes), the likelihood that the gambler will be overconfident so he should probably instead adopt a blanket heuristic to protect him from himself, and the fact that being broke is way worse than being rich is good. (Also, in realistic settings, because the odds are bad anyway—they pretty much have to be, for the casino to keep the lights on—so there’s no point calculating; we already know which side of the decision-relevant threshold the answer must be on.) I don’t think there’s any need to invoke the risk-uncertainty distinction.
And finally, regarding the ideas of iterating and repeating—I think that’s really important, in the sense that it gives us a lot more, very relevant data, and shifts our estimates towards the truth and reduces their uncertainty. But I think on a fundamental level, it’s just evidence, like any other evidence. Roughly speaking, we always start with a prior, and then update it as we see evidence. So I don’t think there’s an absolute difference between “having an initial guess about the odds and then updating based on 100 rounds of gambling” and “having an initial guess about the odds and then updating based on realising that the casino has paid for this massive building, all these staff, etc., and seem unlikely to make enough money for that from drinks and food alone”. (Consider also that you’re never iterating or repeating exactly the same situation.)
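(A minimal sketch of that “it’s all just priors and evidence” framing, which also bears on the earlier point about confidence-interval widths; the Beta-Binomial model and all the counts here are my own illustrative assumptions.)

```python
from scipy import stats

def credible_interval(wins, losses, prior_a=1, prior_b=1, level=0.95):
    """Credible interval for a win probability, under a Beta prior updated
    with observed wins and losses."""
    posterior = stats.beta(prior_a + wins, prior_b + losses)
    tail = (1 - level) / 2
    return posterior.ppf(tail), posterior.ppf(1 - tail)

print(credible_interval(0, 0))            # vague prior: the interval spans most of (0, 1)
print(credible_interval(46, 54))          # after 100 rounds of play: noticeably narrower
print(credible_interval(48_500, 51_500))  # the house, after ~100,000 observed rounds: very narrow
# Evidence that isn't rounds of play (e.g. "they paid for this massive building")
# would enter the same machinery, just as a different likelihood term on the prior.
```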
Most of what I’ve said here leaves open the possibility that the risk-uncertainty distinction—perhaps even imagined as categorical—is a useful concept in practice (though I tentatively argue against that here). But it seems to me that I still haven’t encountered an argument that it actually makes sense as a categorical division.
Michael, do some research on the way casinos work. The casino owners don’t gamble on their income. Here is a link to consider: https://www.quora.com/How-do-casinos-ultimately-make-money

My point about iterations is not about getting better estimates for the probabilities. The probabilities are known, defined quantities, in the argument. The difference is that in some settings one will have the luxury of iterating, and so will be able to actually average the results towards that expected value. If you can iterate an infinite number of times, your results converge to that expected value.
It seems one implication of your position is that people should be indifferent between the following two settings, where the expected payoff is the same:
1) They toss a fair coin as many times as they want. If they get heads, they will receive $60; if they get tails, they pay $50.
2) They can have the same coin, and same payoffs but only get one toss.
Do you think most people’s decisions will be the same? If not, how do you explain the difference?
Maybe this is a matter of different definitions/connotations of “gamble”. Given that the odds are in the casino’s favour, and that they can repeat/iterate the games a huge number of times, the results do indeed tend to converge to the expected value, which is in the casino’s favour—I’m in total agreement there. The odds that they’d lose out, given those facts, are infinitesimal and negligible for pretty much all practical purposes. But it’s that those odds asymptotically approach zero, not that they literally are zero.
It seems very similar to the case of entropy:

The Second Law of Thermodynamics is statistical in nature, and therefore its reliability arises from the huge number of particles present in macroscopic systems. It is not impossible, in principle, for all 6 × 10^23 atoms in a mole of a gas to spontaneously migrate to one half of a container; it is only fantastically unlikely—so unlikely that no macroscopic violation of the Second Law has ever been observed.
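(To put rough numbers on “asymptotically approaching zero, not literally zero” for the casino case, here is a toy model of my own in which every bet is an independent, even-stake, even-money bet that the house wins with probability 0.515; real games aren’t like that, but it shows the shape of the thing.)

```python
from math import ceil
from scipy.stats import binom

def p_house_behind(n_bets, p_house_wins=0.515):
    """Probability the house is down money after n independent, even-stake,
    even-money bets that it wins with probability p_house_wins."""
    # The house is behind iff it wins strictly fewer than half of the bets.
    return binom.cdf(ceil(n_bets / 2) - 1, n_bets, p_house_wins)

for n in (100, 10_000, 1_000_000):
    print(n, p_house_behind(n))
# Falls from roughly 0.35, to roughly 0.001, to something astronomically small,
# but it never literally reaches zero.
```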
But in any case, it seems your key point there, which I actually agree with, is that the deal is better for the casino (partly) because they get to play the odds more often than an individual gambler does, so the value they actually get is more likely to be close to the expected value than the individual gambler’s is. But I think the reason this is better is because of the diminishing marginal utility of money—losing all your money is way worse than doubling it is good—and not because of the risk-uncertainty distinction itself.
(Though there could be relevant interplays between the magnitude of one’s uncertainty and the odds that one ends up in a really bad position, which might make one more inclined to avoid “bets” of any kind when the uncertainty is greater. But again, it’s helpful to consider whether you’re thinking about expected utility or expected value of some other unit, and it also seems unnecessary to use a categorical risk-uncertainty distinction.)
It seems one implication of your position is that people should be indifferent between the following two settings, where the expected payoff is the same:
1) They toss a fair coin as many times as they want. If they get heads, they will receive $60; if they get tails, they pay $50.
2) They can have the same coin, and same payoffs but only get one toss.
Do you think most people’s decisions will be the same? If not, how do you explain the difference?
Regarding whether I think people’s decisions will be the same, I think it’s useful to make clear the distinction between descriptive and normative claims. As I say in footnote 1:
Additionally, it’s sometimes unclear whether proponents of the distinction are merely arguing (a) that people perceive such a distinction, so it’s useful to think about and research it in order to understand how people are likely to think and behave, or are actually arguing (b) that people should perceive such a distinction, or that such a distinction “really exists”, “out there in the world”. It seems to me that (a) is pretty likely to be true, but wouldn’t have major consequences for how we rationally should make decisions when not certain. Thus, in this post I focus exclusively on (b).
So my position doesn’t really directly imply anything about what people will decide. It’s totally possible for the risk-uncertainty distinction to not “actually make sense” and yet still be something that economists, psychologists, etc. should be aware of as something people believe in or act as if they believe in. (Like how it’s useful to study biases or folk biology or whatever, to predict behaviours, without having to imagine that the biases or folk biology actually reflect reality perfectly.) But I’d argue that such researchers should make it clear when they’re discussing what people do vs when they’re discussing what they should do, or what’s rational, or whatever.
(If your claims have a lot to do with how people actually think, rather than being normative claims, then we may be more in agreement than it appears.)
But as for what people should do in that situation, I think my position doesn’t imply people should be indifferent, because people really do get diminishing marginal utility from money, and my position is entirely compatible with that.
In the extreme version of that situation, if someone starts with $150 as their entire set of assets, and takes bet 2, then there’s a 50% chance they’ll lose a third of everything they own. That’s really bad for them. The 50% chance they win $60 could plausibly not make up for that.
If the same person takes bet 1, the odds that they end up worse off go down, because, as you say, the actual results will tend to converge towards the (positive in dollar terms) expected value as one gets more trials/repetitions.
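(A quick sketch of my own to put numbers on that, using the same +$60 / -$50 coin and an arbitrary 100 tosses for bet 1. I’m setting aside the bankroll problem raised earlier, i.e. assuming losses can always be covered, which with only $150 they sometimes couldn’t be.)

```python
from math import ceil
from scipy.stats import binom

def p_worse_off(n_tosses, win=60, lose=50):
    """Chance of finishing below the starting wealth after n fair tosses, assuming
    every loss can be covered (i.e. ignoring the bankroll problem)."""
    # Worse off iff the number of wins W satisfies W * (win + lose) < n_tosses * lose.
    most_wins_still_behind = ceil(n_tosses * lose / (win + lose)) - 1
    return binom.cdf(most_wins_still_behind, n_tosses, 0.5)

print(p_worse_off(1))    # 0.5: a single toss is a coin flip on ending up worse off
print(p_worse_off(100))  # ~0.18: repetition pulls results toward the +$5-per-toss average
```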
So it seems to me that it’s reasonable to see bet 1 as better than bet 2 (depending on an individual’s utility function for money and how much money they currently have), but that this doesn’t require imagining a categorical risk-uncertainty distinction.