To me it seems to highlight the division between the mind and the will. He seems to say that you can control your mind, but you cannot control the way your mind makes you control your mind.
Silas,
I see what you’re saying, but I don’t think I have a moral obligation to take every available opportunity to make money. I’m reminded of an event when I was about 10 years old: I took some small change and threw it in the trash. I don’t remember why I did it, but I do remember that my dad was really offended. But, hey, it was my money. Betting is fine but I don’t see why it should be privileged over other means of expression.
This seems a little bossy to me. Beyond the issue of transaction costs (“the vig”) and the effort of gathering the information to try to beat the market (this would be an intellectual hobby, like blogging, doing crosswords, or following the horses, that would make sense to do if enjoyable in itself), maybe some people don’t want to bet. I have no problem with betting—I enjoy it—but I’m a little puzzled by the statement that people should be betting, or that they have some sort of moral obligation to put their money where their mouth is. Maybe you personally don’t “really believe” things unless you put money on them, but not everybody feels that way.
Eliezer,
OK, one more try. First, you’re picking 3^^^^3 out of the air, so I don’t see why you can’t pick 1/3^^^^3 out of the air also. You’re saying that your priors have to come from some rigorous procedure but your utility comes from simply transcribing what some dude says to you. Second, even if for some reason you really want to work with the utility of 3^^^^3, there’s no good reason for you not to consider the possibility that it’s really −3^^^^3, and so you should be doing the opposite. The issue is not that two huge numbers will exactly cancel out; the point is that you’re making up all the numbers here but are artificially constraining the expected utility differential to be positive.
If I really wanted to consider this example realistically, I’d say that this guy has no magic powers, so I wouldn’t worry about him killing 3^^^^3 people or whatever. A slightly more realistic scenario would be something like a guy with a bomb in a school, in which case I’d defer to the experts (presumably whoever in the police force deals with people like that) on their judgment of how best to calm him down. There I could see an (approximate) probability calculation being relevant, but, again, the key thing would be whether giving him $5 (or whatever) would make him more or less likely to set the fuse. It wouldn’t be appropriate to say a priori that it could only help.
OK, let’s try this one more time:
Even if you don’t accept 1 and 2 above, there’s no reason to expect that the person is telling the truth. He might kill the people even if you give him the $5, or conversely he might not kill them even if you don’t give him the $5.
To put it another way, conditional on this nonexistent person having these nonexistent powers, why should you be so sure that he’s telling the truth? Perhaps you’ll only get what you want by not giving him the $5. To put it mathematically, you’re computing pX, where p is the probability and X is the outcome, and you’re saying that if X is huge, then just about any nonzero p will make pX be large. But you’re forgetting two things: first, if you have the imagination to imagine X to be super-huge, you should be able to have the imagination to imagine p to be super-small. (I.e., if you can talk about 3^^^^3, you can talk about 1/3^^^^3.) Second, once you allow these hypothetical super-large X’s, you have to acknowledge the possibility that you got the sign wrong.
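To put rough numbers on that argument (my notation, not anything from the original posts): write p_help for the probability that paying helps and p_harm for the probability that paying hurts, with the same huge outcome magnitude X either way. Then the expected-utility differential is

    E[U(pay)] − E[U(refuse)] = p_help·X − p_harm·X = (p_help − p_harm)·X

so however large X is, the sign of the whole expression is the sign of p_help − p_harm, and the mugger’s say-so does nothing to pin that difference down as positive. And if p is allowed to shrink like 1/X (picking 1/3^^^^3 out of the air, as above), then p·X is of order 1, not astronomically large.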
When I do this demo in class (see here for details or here for the brief version), I phrase it as “the percentage of countries in the United Nations that are in Africa.” This seems less ambiguous than Kahneman and Tversky’s phrasing (although, I admit, I haven’t done any experiment to check). It indeed works in the classroom setting, although with smaller effects than reported by Kahneman and Tversky (see page 89 of the linked article above).
Eliezer,
You write: “I’m sure they had some minor warnings of an al Qaeda plot, but they probably also had minor warnings of mafia activity, nuclear material for sale, and an invasion from Mars.” I doubt they had credible warnings about an invasion from Mars. But, yeah, I’d like the FBI etc. to do their best to stop al Qaeda plots, Mafia activity, and nuclear material for sale. I wonder if you’re succumbing to a “bias-correction bias” where, because something could be explainable by a bias, you assume it is. Groups of people do make mistakes, some of which could have been anticipated with better organization and planning. I have essentially no knowledge of the U.S. intelligence system, but I wouldn’t let them off the hook just because a criticism could be simply hindsight bias. Sometimes hindsight is valid, right?
Eliezer,
I agree with what you’re saying. But there is something to this “everything is connected” idea. Almost every statistical problem I work on is connected to other statistical problems I’ve worked on, and realizing these connections has been helpful to me.
You write: “There are no surprising facts, only models that are surprised by facts.”
That’s deterministic thinking. Surprising facts happen every once in a while. Rarely, but occasionally.
But I agree with your general point. Surprise is an indication that you have a problem with your model, or that you have prior information that you have not included in your model.
Robin,
You ask, “would potato chips be a ‘waste of taste’, if some people eat too much of them? Is TV a ‘waste of time’, if some people watch too much? Can we say that there is more of a tendency to buy too many lottery tickets than to do too much of any other thing one can do too much of?”
I think much of your question is better addressed to Eliezer, who wrote the original entry with the “waste of hope” phrase. In any case, if someone buys so many lottery tickets that it interferes with other aspects of life (e.g., not being able to pay the rent or whatever), then, yeah, that seems like a problem. Maybe it’s not a problem for such a person, but if it were someone I was close to, I’d be worried. Certainly there are people who have problems with food, drugs, maybe TV too, so I wouldn’t single out gambling as being uniquely troublesome. I was just distinguishing your vision of the occasional lottery ticket (a small part of one’s “portfolio of dreams”) from something that’s habitual, maybe harmful to one’s other goals in life, and maybe not even so much fun.
Robin,
I think the concern is not with people who buy the occasional lottery ticket for fun but with addicts who gamble away a large proportion of their available money.
Eliezer,
Just to be clear . . . going back to your first paragraph, that 0.5 is a prior probability for the outcome of one draw from the urn (that is, for the random variable that equals 1 if the ball is red and 0 if the ball is white). But, as you point out, 0.5 is not a prior probability for the series of ten draws. What you’re calling a “prior” would typically be called a “model” by statisticians. Bayesians traditionally divide a model into likelihood, prior, and hyperprior, but as you implicitly point out, the dividing line between these is not clear: ultimately, they’re all part of the big model.
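To make that distinction concrete, here is a minimal numerical sketch in Python, using a toy urn of my own devising rather than necessarily the one in the original post: assume the urn is equally likely to be all red or all white. Under that model, 0.5 is the marginal probability of red on any single draw, but the ten draws are nothing like ten independent coin flips.

    # Toy model (an assumption for illustration): the urn is all-red with
    # probability 0.5 and all-white with probability 0.5.
    p_all_red = 0.5

    # Marginal probability that a single draw is red: 0.5.
    p_one_red = p_all_red * 1.0 + (1 - p_all_red) * 0.0

    # Probability that all ten draws are red under this model: also 0.5,
    # since the draws are perfectly dependent given the urn's composition.
    p_ten_red = p_all_red * 1.0 + (1 - p_all_red) * 0.0

    # What you would get by wrongly treating "0.5" as an independent
    # per-draw probability: 0.5**10, about 0.001.
    p_ten_red_iid = 0.5 ** 10

    print(p_one_red, p_ten_red, p_ten_red_iid)

The point of the sketch is only that the single-draw number 0.5 does not determine the joint distribution; what does is the mixture over urn compositions, which is the “model” in the statisticians’ sense.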