Curi, you are just confused. But rather than asking questions or attempting to resolve your own confusion, you’ve gone and loudly, falsely accused some good research of being shoddy. You have got to learn to notice when you’re confused. You are trying to deal with confusing topics, and you have approached them with insufficient humility. You have treated criticisms as foes to be fought, and your own ignorance as a castle to be defended. Until you can stop yourself from doing that, you will make no progress, and we cannot help you.
The conjunction fallacy does not exist, as it claims to, for all X and all Y. That it does exist for specially chosen X, Y and context is incapable of reaching the stated conclusion that it exists for all X and Y.
But the research isn’t claiming that the conjunction fallacy exists for all X and Y. The claim, as I understand it, is that people rely on the representativeness heuristic, which is indeed useful in many contexts but which does not in general obey the probability-theoretic law that P(X&Y) ≤ P(X).
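For reference, the law itself is one line of probability algebra (this derivation is mine, added for clarity, not part of the original comment):

$$P(X \wedge Y) = P(X)\,P(Y \mid X) \le P(X), \qquad \text{since } 0 \le P(Y \mid X) \le 1.$$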
Further evidence for the conjunction fallacy was covered on this site in the post “Conjunction Controversy”.
The strangest thing about this post is how everything in it is anthropomorphised. It’s as if what is at stake isn’t determining a relatively isolated matter of fact, but rather ostracizing a social rival.
The conjunction fallacy says...
The conjunction fallacy does not exist, as it claims to...
The research is wrong and biased. It should become less wrong by recanting.
...bad explanations can be rejected without testing, and testing them is pointless anyway (because they can just make ad hoc retreats to other bad explanations to avoid refutation by the data; only good explanations can’t do that).
Explanations are models, not social beings you can hope to subdue or isolate by making them unpopular.
Something that causes angst needn’t be intending to do so. The frustration this idea causes you is in you, not an evil part of this idea. The facts in question are just the way the world is, not a kid in school who is to be slandered for being your rival, whom you can disempower by convincing others s/he is a very biased bad person.
A conjecture: in general, the chances of being right about something are inversely proportional to how closely your model for that thing matches your self-model, which is what you use by default when you don’t understand the specific causal mechanisms at play. This post is an extreme example of that phenomenon.
If you’re getting angry at patterns of human cognition for “claiming to exist” because they “really” “do not exist”, you’re simply doing it wrong.
Why do you assume people you disagree with are angry?
This could be unpacked in several ways: “Why do you always assume people you disagree with are angry?”, “Why do you sometimes but not always assume people you disagree with are angry?”, and an equivocation which pretends to the advantages of each statement.
“Why do you always assume people you disagree with are angry?” would be silly for you to say after having read only my few LW posts. If true, it would indicate a severe reasoning flaw on my part.
“Why do you sometimes but not always assume people you disagree with are angry?” in turn has several possible meanings. “Assumption” could mean an arbitrary prior I have before considering the subject I am analyzing, or alternatively it could mean an intermediate conclusion that is both reasoned and a basis for other conclusions. It may also mean something I’m not thinking of, and like other words, it may of course be an equivocation in which the speaker draws on different parts of coherent but incompatible definitions.
If “assumption” means “prior”, your statement suggests that my assumptions may be silly for a reason beyond their being false: read this way, it implies that my reasoning is not merely misaligned with the reality about which I am trying to reason, but inconsistent, varying with which of several arbitrary reading assumptions I select without consulting the external world. As an analogy, not only do I attempt to map the cities around mine without looking at them, but my maps contradict each other by allocating the same space to different cities in an impossible way. “Prior” is often the intended reading of the word “assumption”, so a definition along these lines springs naturally to mind. However, you would have even less justification for this meaning than for the previous one.
If “assumption” means “intermediate conclusion that is both reasoned and a basis for other conclusions”, which it sometimes does, then your statement wouldn’t be at all unflattering to me or relevant.
Something that would at first appear to be an equivocation might itself not be logically flawed, but its meaning might be difficult to track. For instance, one could say “Why do you think some people you disagree with are angry?” “Think” and “some” don’t presuppose anything about the reason behind the thought. Knowing that I think you are angry, you would be correct to use a term that encompasses me either assuming or concluding that you are angry, as you don’t know which it is.
“Why do you assume people you disagree with are angry?” depends on being interpreted as “Why do you sometimes think, whether as a prior or an intermediate conclusion, that some people you disagree with are angry?” to seem a grammatical question that does not make unreasonable assumptions. However, in the context of responding and trying to rebut a criticism of your thoughts, your statement only performs the work it is being called on to do if its answer casts doubt on my thinking.
“Why do you assume all people you disagree with are angry?”, “Why do you arbitrarily assume some people you disagree with are angry?”, and “How could you reasonably conclude all people you disagree with are angry?” are questions to which no one could give a reasonable explanation, taking their assumptions for granted. These are powerful rebuttals, if they can be validly stated without making any assumptions.
Though there is no explanation to them, there is an explanation for them, a response to them, and an answer to them, namely: you are making incorrect assumptions.
Here, your statement attempts to borrow plausibility from the true but irrelevant formulation without assumptions from which there is no conclusion that my thinking is flawed, while simultaneously relying on the context to cast it as a damning challenge if true—which a different sentence in fact would be. Your equivocation masks your assumptions from you.
As to why I think that...well, I started with the second paragraph, so we can start there.
It is based on bad pseudo-scientific research designed to prove that people are biased idiots. One of the intended implications, which the research does nothing to address...
(Pejorative) research you disagree with is designed to prove people are (pejorative). Supposed intended implications that you despise, which are so important they aren’t at all addressed...which of course would mean that you have no evidence for them...and are confabulating malice both where and as an explanation behind what you do not fully understand or agree with.
You apparently sometimes assume people are angry on minimal evidence and without explaining your reasoning. You did it with me, and I don’t think this is a personal grudge but something you might do with someone else in similar circumstances. In particular, I think you may have tried to judge my emotional state from the on-topic content, and perhaps also the style, of my writing. I understand that in some cases, with some people, such judgments are accurate. I don’t think this is such a case, and I don’t think you had any good evidence or good explanation to tell you otherwise.
I think that you had some kind of reason for thinking I was angry, but that you left it unstated, which shields it from critical analysis.
The way I read your statements, you assumed I was angry (based presumably on unstated reasoning) and drew some conclusions from that, rather than asserting I was angry and arguing for it. E.g. you suggested that I got angry and it was causing me to do it wrong. (BTW that looks to me like, by your standards, stronger evidence that you were angry at me than the evidence of my anger in the first place. It is pejorative. I suspect you actually meant well—mostly—but so did I.)
(Pejorative) research you disagree with is designed to prove people are (pejorative).
People can dislike things without being angry. Right? If you disagree with me about the substance of my criticisms, ad hominems against my emotions are not the proper response. Right?
I think that the between-subjects versions of the conjunction fallacy studies actually provide stronger evidence that people are making the mistake. For instance*, one group of subjects is given the description of Linda and asked to rank the following statements in order, from the one that is most likely to be true of Linda to the one that is least likely to be true of her:
a) Linda is a teacher in elementary school
b) Linda works in a bookstore and takes Yoga classes
c) Linda is a psychiatric social worker
d) Linda is a member of the League of Women Voters
e) Linda is a bank teller and is active in the feminist movement
f) Linda is an insurance salesperson
A second group of subjects does the same task, but statement (e) is replaced by “Linda is a bank teller.”
In this version of the experiment, there should be no ambiguity about what is being asked for, or about the meaning of the words “likely” or “and”—the subjects can understand them, just as statisticians and logicians do. But ‘feminist bank teller’ still gets judged as more likely than ‘bank teller’ relative to the alternatives; that is, subjects in the first group give statement (e) an earlier (more likely) ranking than subjects in the second group.
Some people don’t like the between-subjects design because it’s less dramatic, and you can’t point to any one subject and say “that guy committed the conjunction fallacy”, but I think it actually provides more clear-cut evidence. Subjects who saw a conjunctive statement about Linda being a bank teller and a feminist considered it more likely than they would have if the statement had not been a conjunction (and had merely included “bank teller”). This isn’t a trick, it’s an error that people will consistently make—overestimating the likelihood that a very specific or rare statement applies to a situation, even considering it more likely than they would have considered a broader statement which encompasses it, because it sounds like it fits the situation.
* This study design has been used, with these results, I think in Kahneman and Tversky’s original paper on representativeness. But I didn’t take the time to find the paper and reproduce the exact materials used; I just googled and took the 6 statements from here.
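To make the between-subjects logic concrete, here is a minimal sketch of the comparison being described; the rankings are entirely hypothetical, invented only to illustrate the analysis, and are not data from any study:

```python
# Hypothetical illustration of the between-subjects analysis described above.
# Each subject ranks six statements from 1 (most likely) to 6 (least likely).
# Group 1 ranked "Linda is a bank teller and is active in the feminist movement";
# Group 2 ranked "Linda is a bank teller". All numbers are made up.

group1_target_ranks = [2, 3, 2, 4, 3]  # hypothetical ranks for the conjunction
group2_target_ranks = [5, 6, 5, 4, 6]  # hypothetical ranks for the lone conjunct

def mean(ranks):
    return sum(ranks) / len(ranks)

# The conjunction effect shows up if the conjunction receives an earlier
# (numerically smaller) average rank than its own conjunct does.
if mean(group1_target_ranks) < mean(group2_target_ranks):
    print("Subjects ranked the conjunction as more likely than its conjunct.")
```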
Looking at some of this, I wonder if people are biased towards thinking that the more concrete statement is more likely? Somehow, in my mind, “feminist” is more abstract than “feminist bank teller”. The latter seems closer to being a person, whereas the former seems closer to a concept. The more descriptive you are about a subject, the more concrete it sounds, and thus the more likely it seems, because it sounds closer to reality. The less descriptive, the more abstract it sounds, and therefore the less likely it seems, because it sounds more hypothetical or “theoretical”.
Of course, the more descriptive account is going to have more conjunctions, and therefore has a lesser or equal probability. I just wonder if this has been taken into account.
This post is pretty incoherent, but I was able to get from it that you had done basically no research on the conjunction fallacy. So here, have a random paper.
Fallacies are not hard and fast rules about the way people think. The conjunction fallacy is an example of something people SOMETIMES do wrong when thinking, and saying that it doesn’t exist because it only happens in specific situations is like saying ad hominem attacks don’t exist because you’ve never used one.
I think it’s worthwhile that you brought this up (and I’d love to see a discussion on the same issue of questionable phrasing in the Wason selection test, where I suspect your criticism might be valid), but I don’t think you’re right.
I mean, yes, most people would get the [cancer cure/cancer cure+flood] right. It’s easy. But the example of Linda the [bank teller/bank teller+active feminist] does have exactly the same logical structure. And people get it wrong because of the way they’re used to using language without thinking very hard in this context, yes.
Kinda like the way that people can fall for “if all X’s are A, B, and C, then anything that’s A, B, and C must be an X,” but not “if all dalmatians have spots, then anything with spots is a dalmatian”. Cuz we’ve got this screwy verb “to be” that sometimes means ‘is identical with’, sometimes ‘is completely within the set of’, sometimes ‘is one example of something with this property’ etc etc, and we have to rely on concrete examples oftentimes in order to figure out which applies in a particular context.
And maybe this is largely genetic, or maybe humans would reliably come to see the correct answers as obvious if they were raised in an environment where people talked about these issues a lot in a language that was clear about it. I dunno.
But I do know that I have run into problems following the same pattern as the Linda the [bank teller/bank teller+active feminist] “in the wild”, and made mistakes because of them. I have created explanations and plans with lots of burdensome details and tricked myself into thinking that made them more likely to be right, more likely to succeed. And I’ve been wrong and failed because of that.
And I don’t think I’m unusual in this.
And I do know that I’ve avoided making this mistake at least several times after I learned about these studies and understood the explanation for why it’s wrong to rate “linda is a bank teller and an active feminist” as more likely.
I mean, yes, most people would get the [cancer cure/cancer cure+flood] right. It’s easy. But the example of Linda the [bank teller/bank teller+active feminist] does have exactly the same logical structure.
But it doesn’t have the same structure as presented to the person being asked. They specifically put in a bunch of context designed so that if you try to use the contextual information provided you get the “wrong” answer (literally wrong, ignoring the fact that the person isn’t answering the question literally written down).
It’s like if you added the right context to the flood one, you could get people to say both. Like if you talked for a while about how a flood is practically guaranteed, and then asked, some people are going to look at the answers and go “he was talking about floods being likely. this one has a flood. the other one is irrelevant”. Why? Well they don’t know or care what you mean by the technical question, “Which is more probable?”. Answering a different question than the one the researcher claims to have communicated to his subjects is different from people actually, say, doing the probability math wrong.
Yes, they put in a bunch of irrelevant context, and then asked the question. And rather than answering the question that was actually asked, people tend to get distracted by all those details and make a bunch of assumptions and answer a different question.
Say the researchers put it this way instead: “When we surround a certain simple easy question with irrelevant distracting context, we end up miscommunicating very badly with the subject.
Furthermore, even in ‘natural’ conditions where no-one is deliberately contriving to create such bad communication, people still often run into similar situations with irrelevant distracting context ‘in the wild’, and react in the same sorts of ways, creating the same kind of effects.
Thus people often make the mistake of thinking that adding more details to explanations/plans makes them more likely to be good, and thus do people tend to be insufficiently careful about pinning down every step very firmly from multiple directions.”
That’s bending over backwards to put the ‘fault’ with the researchers (and the entire rest of the world), and yet it still leads to the same significant, important real world day-to-day phenomena.
That’s why I think you’re “wrong” and the “conjunction fallacy” does actually “exist”.
Or, to put it another way, “conjunction fallacy exists—y/n” correlates with some actual feature that the territory can meaningfully have or not have, and I’m pretty damn sure that the territory does have it.
However! I would like to talk about how the same criticism might apply to the Wason selection task, and the hypothesis that humans are naturally bad at reasoning about it generally, but have a specialized brain circuit for dealing with it in social contexts.
Check it. Looking at the wikipedia article, and zeroing in on this part:
Which card(s) should you turn over in order to test the truth of the proposition that if a card shows an even number on one face, then its opposite face is red?
I know I had to translate that to “even number cards always have red on the other side” before I could solve it, thinking, “well, they didn’t specifically say anything about odd number cards, so I guess that means they can be any color on the other side”.
Because that’s the way people would normally phrase such a concept (usually seeking a quick verification of the assumption about it being intended to mean odds can be whatever, if there’s anyone to ask the question).
And we do that cuz English ain’t got an easy distinction between ‘if’ and ‘if and only if’, and instead we have these vague and highly variable rules for what they probably mean in different contexts (ever noticed that “let’s” always seems to be an ‘inclusive we’ and “let us” always seems to be an ‘exclusive we’? Maybe you haven’t cuz that’s just an oddity of my dialect! I dunno! :P).
And maybe in that sort of context, for people who are used to using ‘if’ normally and not in a special math jargon way, it seems to be more likely to mean ‘if and only if’.
And if the people in the experiment are encouraged to ask questions to make sure they understand the question properly, or presented with the phrasing “even number cards always have red on the other side” instead, maybe in that case they’d be as good at reasoning about evens and odds and reds and browns on cards as they are at reasoning about minors and majors and cokes and beers in a bar.
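(To check that intuition, here is a little sketch of my own, with made-up card faces, that brute-forces which cards must be flipped under each reading of the rule: a card must be flipped exactly when some hidden face could falsify the rule.)

```python
# Brute-force check of the Wason selection task under two readings of the rule.
# Visible faces: an even number (8), an odd number (3), a red back, a brown back.

def must_flip(kind, value, rule):
    """A card must be flipped if some possible hidden face would falsify the rule."""
    # Number cards hide a color; color cards hide a number.
    hidden_options = ["red", "brown"] if kind == "number" else [2, 3]
    for hidden in hidden_options:
        number = value if kind == "number" else hidden
        color = hidden if kind == "number" else value
        if not rule(number, color):
            return True
    return False

conditional = lambda n, c: n % 2 != 0 or c == "red"        # "if even, then red"
biconditional = lambda n, c: (n % 2 == 0) == (c == "red")  # "even if and only if red"

cards = [("number", 8), ("number", 3), ("color", "red"), ("color", "brown")]
for name, rule in [("if", conditional), ("iff", biconditional)]:
    print(name, "->", [v for k, v in cards if must_flip(k, v, rule)])
# if  -> [8, 'brown']             (the standard correct answer)
# iff -> [8, 3, 'red', 'brown']   (every card matters under this reading)
```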
But that seems like an obvious thing to check, trying to find a way of phrasing it so that people get it right even though it’s not involving anything “social”, and I’d be surprised if somebody hasn’t already. Guess I oughta post my own discussion topic on that to find out! But later. Right now, I’ma go snooze.
The thing that bemuses me most at the moment is how bad I am at predicting the voting response to a post of mine on LW. Granted that I and most people upvote and downvote to get post ratings to where we think they should be, rather than rating them by what we thought of them and letting the aggregate chips fall where they may, I still can’t do well predicting it (that factor should actually be improving my accuracy).
This post of yours is truly fantastic and a moral success; you avoided the opportunities to criticize flaws in the OP and found a truly interesting instance of when an argument resembling that in the OP would actually be valid.
Granted that I and most people upvote and downvote to get post ratings to where we think they should be,
(This factor applies most significantly to mid-quality contributions. There is a threshold beyond which votes tend to just spiral off towards infinity.)
I think this is the only example I have seen of a negative infinity spiral on LW. The OP shouldn’t be too discouraged, if what you are saying is right (and I think it is). Content rated approaching negative infinity isn’t (adjudged) that much worse than content with a high negative rating.
In a way, this post is correct: the conjunction fallacy is not a cognitive bias. That is, thinking “X&Y is more likely than X” is not the actual mistake that people are making. The mistake people make, in the instances pointed out here, is that they compare probability using the representativeness heuristic—asking how much a hypothesis resembles existing data—rather than doing a Bayesian calculation.
However, we can’t directly test whether people are subject to this bias, because we don’t know how people arrive at their conclusions. The studies you point out try to assess the effect of the bias indirectly: they construct specific instances in which the representativeness heuristic would lead test subjects to the wrong answer, and observe that the subjects do actually get that wrong answer. This isn’t an ideal method, because there could be other explanations, but by conducting different such studies we can show that those other explanations are less likely.
But how do we construct situations with an obviously wrong answer? One way is to find a situation in which the representativeness heuristic leads to an answer with a clear logical flaw—a mathematical flaw. One such flaw is to assign a probability to the event X & Y greater than the probability of X. This logical error is known as the conjunction fallacy.
(Another solution to this problem, incidentally, is to frame experiments in terms of bets. Then, the wrong answer is clearly wrong because it results in the test subject losing money. Such experiments have also been done for the representativeness heuristic)
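To illustrate the contrast, here is a toy sketch; every number in it is hypothetical, chosen only to show the structural difference between the two ways of judging:

```python
# Toy contrast between a probability calculation and a representativeness-style
# judgment for the Linda example. All numbers below are hypothetical.

p_teller = 0.05                # assumed P(bank teller | description)
p_feminist_given_teller = 0.6  # assumed P(feminist | bank teller, description)

# Any genuine probability calculation respects the conjunction rule:
p_teller_and_feminist = p_teller * p_feminist_given_teller
assert p_teller_and_feminist <= p_teller

# A representativeness judgment instead scores how well each statement
# resembles the description; nothing forces the conjunction's score below
# the score of its conjunct, so the ranking can invert:
similarity = {"bank teller": 0.1, "bank teller and feminist": 0.4}  # made up
assert similarity["bank teller and feminist"] > similarity["bank teller"]
```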
I like this post because it exposes some of my own faulty thinking.
I find myself less likely to look into what David Deutsch says about anything because of the manner you present your ideas. This isn’t fair; maybe you’re misrepresenting what he actually says. Maybe he explains it in a less grating fashion. Whatever his actual thoughts, it’s not fair to him that I’m discounting what he has to say...and yet, I can’t help it!
Very similar to point #1, I had a very difficult time reading all of your post because of things like:
This is false and misleading. It is based on bad pseudo-scientific research designed to prove that people are biased idiots.
… To me, the language of this sentence seems combative and off-putting, and in fact much of the post does. I wasn’t able to put much thought into what you had to say because of this. The attitude conveyed seems to be “Haha, you guys are so dumb for believing the conjunction fallacy exists!”.
I should be able to read past the attitude it felt like I was getting from your writing and understand the points you were trying to make...but I confess it was very difficult to do so!
And finally (this is supposed to be item #3, but the software makes it a #1...)
The conjunction fallacy does not exist, as it claims to, for all X and all Y.
Does it? I was never under the impression that this was what the conjunction fallacy showed. I may be wrong, but it doesn’t matter to my broken mind. This was another strike against your post, and I wish it hadn’t made it more difficult to read the rest of it. You could have expanded on why you felt the conjunction fallacy made such a sweeping generalization, but you didn’t, and that made it harder for me to read.
Deutsch’s The Fabric Of Reality, the only book of his I’ve read, is very well-written and has much that LessWrongers would agree with or find interesting, though I disagree with some of it. The version of Deutsch’s ideas that curi is presenting seems to me utterly wrong-headed. This suggests to me that either Deutsch has suffered some catastrophic failure of reasoning (possibly due to severe brain injury) in the intervening decade-and-a-half, or that curi is misrepresenting his ideas.
Any specific details? Why do you think I’m misrepresenting his ideas? Do you know anything about my relationship with him? Or about how much I’ve studied his ideas, compared to how much you have? Can you point out anything I said which contradicts anything from FoR?
It seems to me you are claiming the authority of having read his book, to know what he thinks. In this regard you are outclassed. And as to rational argument … you don’t give any.
One reason I don’t edit out “combative” language is I don’t interpret it as combative or emotional. It is the literal truth—at least my best guess at such. I don’t read things in the emotional way that some people do, and I’m not other-people-oriented, and I don’t want to be, so it’s hard to remember the parochial and arbitrary rules others want me to follow to sound nice to them. And I don’t mind if the people too biased to read past style issues ignore me—I consider it a benefit that they will self-select themselves for me like that.
I don’t want to or like to spend my mental effort remembering other people’s biases in order to circumvent them. And I’m content with the results. The results I’ve gotten here are not representative (that is, I don’t offend everyone. I have control over that). And the assumptions you’re probably making about my goals here (required to judge if I got what I wanted, or not) are not correct either.
Does it?
It’s supposed to be a universal issue (which happens some significant portion of the time, not every time). That’s the only way it can demonstrate that it has anything to do with X&Y vs Y. If it’s merely tricks of the type I describe, the X&Y vs Y part is irrelevant and arbitrary; random things of the form X&Y vs Y in general won’t work for it, and you could easily do it with another logical form.
I find myself less likely to look into what David Deutsch says about anything because of the manner you present your ideas. This isn’t fair; maybe you’re misrepresenting what he actually says. Maybe he explains it in a less grating fashion.
If you read his books you will find the style less grating.
And I don’t mind if the people too biased to read past style issues ignore me—I consider it a benefit that they will self-select themselves for me like that.
Do you feel as if anyone who reads Less Wrong is getting any benefit out of your post?
And the assumptions you’re probably making about my goals here (required to judge if I got what I wanted, or not) are not correct either.
I don’t think I’m making any assumptions about this. What are your goals? I can think of several off the top of my head:
Convince others of your ideas.
Get feedback on your ideas to help you further refine them.
Perform a social experiment on LW (use combative tone, see results)
Goal #3 and related “trollish” goals are the only ones I can think of off the top of my head that benefit from a combative tone.
I mean...why write at all if you don’t want most people to look at the actual content of your writing?
These aren’t attacks in question form. These are honest questions.
Do you feel as if anyone who reads Less Wrong is getting any benefit out of your post?
I personally enjoyed confirming that this thread sent curi below the threshold at which anyone who is not a prominent SIAI donor can make posts. Does that count?
Do you feel as if anyone who reads Less Wrong is getting any benefit out of your post?
Haven’t you noticed some people said positive things in comments?
You say you aren’t making “any assumptions” but then you give some. As an example, I’m not here to do (1) -- if it happens, cool, but it’s not my goal. That was, apparently, your assumption.
I mean...why write at all if you don’t want most people to look at the actual content of your writing?
The halfway sane people can read the content. I haven’t actually said anything I shouldn’t have. I haven’t actually said anything combative, except in a few particular instances where it was intentional (none in this discussion topic). I’ve only said things that I’m vaguely aware offend some silly cultural biases. The reason I said “biased idiots” is not to fight with anyone but because it clearly and accurately expresses the idea I had in mind (and also because the concept of bias is a topic of interest here). There is a serious disrespect for people as rational beings prevalent in the culture here.
Do people really get offended because of language? Maybe sometimes. I think most of it is not because of that. It’s the substance. I think the attitude behind the conjunction fallacy, and the studies, and various other things popular here, is anti-human. It’s treating humans like dirt, like idiots, like animals. That’s false and grossly immoral. I wonder how many people are now more offended due to the clarification, and how many less.
These aren’t attacks in question form. These are honest questions.
That’s what I thought. And if I didn’t, saying it would change nothing because asserting it is not an argument.
Unless, perhaps, it hadn’t even occurred to me at all. I think there’s a clash of world views and I’m not interested in changing mine for the purpose of alleviating disagreement. E.g. my writing in general is focussed on better people than the ones to whom it wouldn’t occur that questions might not be attacks. They are the target audience I care about, usually. You’re so used to dealing with bad people that they occupy attention in your mind. I usually don’t want them to get attention in my mind. There’s better people in the world, and better things to think about.
There is a serious disrespect for people as rational beings prevalent in the culture here.
People aren’t completely rational beings. Pretending they are is showing more disrespect than acknowledging that we have flaws.
It’s treating humans like dirt, like idiots,
Not like dirt. Not like idiots. As though we sometimes act as idiots, yes. Because we sometimes do. You seem to have trouble distinguishing “always” from “sometimes”.
like animals.
We are, in fact, animals. We’re a type of ape that has a very big brain, as primates go. We have many differences from the rest of the animals, but the similarities with other animals should be clear enough that it would be a severe mistake to not call us animals.
That’s false and grossly immoral.
Ah, now this is a very honest and revealing statement. These are two separate issues, of course. Statements can be true or false, and actions can be grossly immoral, and not the reverse. Yet they’re linked in your mind. Why is that? Did you decide these statements(*) were false and thus holding them grossly immoral (or, to be charitable, likely to lead to grossly immoral actions), or were you offended at the statements and thus decided they must be false?
I wonder how many people are now more offended due to the clarification, and how many less.
Not offended. Just saddened.
(*) Rather, your misunderstanding of them. “People can think P(X&Y) > P(X)” is not the same as “People always think P(X&Y) > P(X) for all X and Y”. Yes, of course special X and Y have to be selected to demonstrate this. There is a wide range between “humans aren’t perfect” and “humans are idiots”. The point of studies like this is not to assert humans are idiots, or bad at reasoning, or worthless. The point is to find out where and when normal human reasoning breaks down. Not doing these studies doesn’t change the fact that humans aren’t perfectly rational, it merely hides the exact contours of when our reasoning breaks down.
The possible (asserted, not shown) existence of an agenda might indeed have something to do with what studies were done and how the interpreters presented them. The morality or immorality of this agenda is what is a separate issue.
I think the attitude behind the conjunction fallacy, and the studies, and various other things popular here, is anti-human. It’s treating humans like dirt, like idiots, like animals. That’s false and grossly immoral.
This seems terribly backwards to me. In order to become more rational, to think better, we must examine our built-in biases. We don’t do and use such studies simply to point out the results and laugh at “idiots”; we must first understand the problems we face, the places where our brains don’t work well, so that we can fix them and improve ourselves. Utilizing the lens that sees its flaws is inherently pro-human. (Note that “pro-human” should be taken to mean something reasonable like “wanting humans to be the best that they can be,” which does involve admitting that humans aren’t at that point right now. That is only “anti-human” in the sense that Americans who want to improve their country are “anti-America.”)
This seems terribly backwards to me. In order to become more rational, to think better, we must examine our built-in biases.
I wasn’t talking about examining biases that exist. I was talking about making up biases, and finding ways to claim the authority of science for them.
These studies, and others, are wrong (as I explained in my post). And not randomly. They don’t reach random conclusions or make random errors. All the conclusions fit a particular agenda which has a low opinion of humans. (Low by what standard? The average man-on-the-street American Christian, for example. These studies have a significantly lower opinion of humans than our culture in general does.)
We don’t do and use such studies simply to point out the results and laugh at “idiots”;
Can you understand that I didn’t say anything about laughing? And that replies like this are not productive, rational discussion.
People who demean all humans are demeaning themselves too. Who are they to laugh at?
Utilizing the lens that sees its flaws is inherently pro-human.
Do you understand that I didn’t say that the “lens that sees its flaws” was the anti-human?
we must first understand the problems we face, the places where our brains don’t work well,
Can you understand that the notion that our brains don’t work well, in some places, is itself a substantive assertion (for which you provide no argument, and for which these bad studies attempt to provide authority)?
Can you see that this claim is in the direction of thinking humans are a less awesome thing than they would otherwise be?
I’ll stop here for now. If you can handle these basic introductory issues, I’ll move on to some further explanation of what I was talking about. If you, for example, try to interpret these statements as my substantive arguments on the topic, and complain that they aren’t very good arguments for my initial claim, then I will stop speaking with you.
These studies, and others, are wrong (as I explained in my post). And not randomly. They don’t reach random conclusions or make random errors. All the conclusions fit a particular agenda which has a low opinion of humans. (Low by what standard? The average man-on-the-street American Christian, for example. These studies have a significantly lower opinion of humans than our culture in general does.)
Your post only served to highlight your own misunderstanding of the topic. From this, you’ve proceeded to imply an “agenda” behind various research and conclusions that you deem faulty. To be blunt: you sound like a conspiracy theorist at this point.
Can you understand that the notion that our brains don’t work well, in some places, is itself a substantive assertion (for which you provide no argument, and for which these bad studies attempt to provide authority)?
Also, at this point it sounds like you’re throwing out all studies of cognitive bias based on your problems with one specific area.
Can you see that this claim is in the direction of thinking humans are a less awesome thing than they would otherwise be?
Having a low opinion of default human mental capabilities does not necessarily extend to having a low opinion of humans in general. Humans are not static. Having an undeservedly high opinion of default human mental capabilities is a barrier to self-improvement.
You constantly attack trivial side-points, and peculiarities of wording, putting you somewhere in the DH3-5 region, not bad but below what we are aiming for on this site, which is DH6 or preferably Black Belt Bayesian’s new DH7 (deliberately improving your opponents’ arguments for them, refuting not only the argument but the strongest thing which can be constructed from its corpse).
Can you understand that I didn’t say anything about laughing? And that replies like this are not productive, rational discussion.
This is a good example. Yes, he used the word laugh and you didn’t, but that is not important to the point he was making, nor does it significantly change the meaning or the strength of the argument to remove it. If you wish to reach the higher levels of the DH, you should try to ignore things like that and refute the main point.
Do you understand that I didn’t say that the “lens that sees its flaws” was the anti-human?
This is another example. You never used the phrase ‘lens that sees its flaws’, but since the heuristics and biases program is absolutely central to the lens that sees its flaws, and you accused that of being anti-human, his central point still stands.
Side-points are not trivial, and they are not unimportant, nor irrelevant (some can sometimes be irrelevant; these aren’t, in my view).
Every mistake matters. Learning is hard. Understanding people very different than you is hard. Agreeing about which points are trivial, with people who see the world differently, is hard too.
To deal with the huge difficulty of things like learning, changing minds, making progress, it’s important to do things like respect every mistake and try to fix them all, not dismiss some as too small and gloss them over. Starting small is the best approach; only gradual progress works; trying to address the big issues straight away does not work.
Black Belt Bayesian’s new DH7
New? Do you really think that’s a new idea? Popper talked about it long ago.
Yes, he used the word laugh and you didn’t, but that is not important
But I regarded it as important in several ways.
nor does it significantly change the meaning or the strength of the argument to remove it.
But 1) I think it does 2) I think that by rewriting it without the mistake he might learn something; he might find out it does matter; we might have a little more common ground. I don’t think skipping steps like this is wise.
To deal with the huge difficulty of things like learning, changing minds, making progress, it’s important to do things like respect every mistake and try to fix them all, not dismiss some as too small and gloss them over. Starting small is the best approach; only gradual progress works; trying to address the big issues straight away does not work.
You do not take into account time constraints. I have not been keeping count, but discussions with you alone have probably eaten up about 2 hours of my time, quite a lot considering the number of other tasks I am under pressure to perform.
Your approach may work, but in terms of progress made per second, it is inefficient. Consider how long it takes for every mistake:
1) First you have to notice the mistake
2) Then you have to take the time to write about it
3) Then the other person has to take the time to read it
4) Then they have to take the time to explain it to you / own up to it
5) Then they have to pay extra attention while they type, for fear they will make a similar mistake and you will waste their time again
6) Then, everybody else reading the comments will have to read through that discussion, and we really don’t care about such a tiny mistake, really, not at all!
Is it worth it for the tiny gains?
I think a better approach is to go straight for the central claim. If you and they continue to disagree, consider the possibility that you are biased, and question why their point might seem wrong to you, what could be causing you not to see their point, do you have a line of retreat? (You trust that they will be doing the same; don’t make accusations until you are sure of yourself.) If that doesn’t work, you start tabooing words; an amazingly high proportion of the time the disagreement just vanishes. If that doesn’t work, you continue to look for reasons why you might be wrong, and keep explaining your position more clearly and trying to understand theirs.
You do not nitpick.
It wastes everybody’s time.
New? Do you really think that’s a new idea? Popper talked about it long ago.
If Popper talked about it then why don’t you do it!
If Popper talked about it then why don’t you do it!
The more you’re having a hard time communicating and cooperating with people, the less you should skip steps. So, I will consider the best version in my mind to see if it’s a challenge to my views. But as far as trying to make progress in the discussions, I’ll try to do something simpler and easier (or, sometimes, challenging to filter people).
You do not take into account time constraints.
Rushing doesn’t solve problems better. If people haven’t got time to learn, so be it. But there’s no magic short cuts.
Your approach may work, but in terms of progress made per second, it is inefficient.
I disagree. So many conversations have basically 0 progress. Some is better than none. You have to do what actually works.
Is it worth it for the tiny gains?
Yes. That’s exactly where human progress comes from.
I think a better approach is to go straight for the central claim.
But there’s so much disagreement in background assumptions to sort out, that this often fails. If you want to do this, publish it. Get 100,000 readers. Then some will like it. In a one on one conversation with someone rather different than you, the odds are not good that you will be able to agree about the central claim while still disagreeing about a ton of other relevant stuff.
Sometimes you can do stuff like find some shared premises and then make arguments that rely only on those which reach conclusions about the central claim. But that’s kind of super hard when you think so differently that there’s miscommunication every other sentence.
People so badly underestimate the difficulty of communicating in general, and how much it usually relies on shared background/cultural assumptions, and how much misunderstanding there really is when talking to people rather different than yourself, and how much effort it takes to get past those misunderstandings.
BTW I would like to point out that this is one of the ways that Popperians are much more into rigor and detail than Bayesians, apparently. I got a bunch of complaints about how Popperians aren’t rigorous enough. But we think these kinds of details really matter a lot! We aren’t unrigorous, we just have different conceptions of what kind of rigor is important.
If you and they continue to disagree, consider the possibility that you are biased, and question why their point might seem wrong to you, what could be causing you not to see their point, do you have a line of retreat?
Yes I do that. And like 3 levels more advanced, too. Shrug.
If that doesn’t work, you start tabooing words; an amazingly high proportion of the time the disagreement just vanishes.
You’re used to talking with people who share a lot of core ideas with you. But I’m not such a person. Tabooing words will not make our disagreements vanish because I have a fundamentally different worldview than you, in various respects. It’s not a merely verbal disagreement.
5) Then they have to pay extra attention while they type, for fear they will make a similar mistake and you will waste their time again
Fear is not the right attitude. And nor is “extra attention”. One has to integrate his knowledge into his mind so that he can use it without it taking up extra attention. Usually new skills take extra attention at first but as you get better with them the attention burden goes way down.
They are the target audience I care about, usually. You’re so used to dealing with bad people that they occupy attention in your mind. I usually don’t want them to get attention in my mind. There’s better people in the world, and better things to think about.
You say you aren’t making “any assumptions” but then you give some. As an example, I’m not here to do (1) -- if it happens, cool, but it’s not my goal. That was, apparently, your assumption.
I didn’t say I was making any of those assumptions. In fact, I specifically said I wasn’t assuming any of them.
This basically reads as, “The point of writing is for other people to divine what’s inside my head. There’s no point in me actually trying to communicate this clearly. I’ll write whatever I feel like and the burden is on them to figure it all out.”
Using lots of conclusory and inflammatory language makes your writing vastly less compelling. It’s not a far cry from saying, “Everyone who disagrees with me is a doo-doo head!” Writing to convince people has rules and forms you need to follow, because if you don’t, you will fail at convincing people. If your goal is merely the satisfaction of your own ego, I suppose this isn’t a problem.
In other words, if you have no desire to actually communicate successfully or persuade people, by all means, keep doing what you’re doing.
This post that you have excreted has essentially zero content. You restate the core idea behind the representativeness heuristic repeatedly, and baldly assert that there are good reasons for people having the intuitions that they do, that people are “using valuable real life skills” when they give incorrect answers to questions. No one’s arguing that it hasn’t been an evolutionarily useful heuristic, just that it happens to be incorrect from time to time. I cannot figure out where in your post you actually made an argument that the conjunction fallacy “doesn’t exist”, and I am overjoyed that you no longer have the karma to make top-level posts.
The point of the conjunction fallacy is that, under specific circumstances, people’s ability to estimate probability will reliably misfire, i.e. people are biased.
This does not require that people’s estimates always misfire. You seem to have some odd personal stake in this; the conjunction fallacy is not some great criticism of people’s moral worth. It is merely an observation that the human brain does not function optimally, much like the existence of blind spots.
I can think of two ways this article represents a coherent point: first, it is pointing out that we do not always miscalculate. This is an obvious fact and does not conflict with the bias in question.
Second, you might be complaining that the circumstances under which people exhibit this bias are so unlike reality as to be irrelevant. You provide no evidence and virtually no argument to this effect. Indeed, people make off-the-cuff estimates without consulting the laws of probability all the time. If your point is that it is of no practical consequence, you did not get anywhere near demonstrating that.
Also, you said that a study is unbiased only if it confirms your view, then asked for unbiased studies contradicting your view. I hope the problem there is now apparent to you.
The first one is flawed, IMO, but not for the reason you gave (and I wouldn’t call it a ‘trick’). The study design is flawed. They should not ask everyone “which is more probable?” People might just assume that the first choice, “Linda is a bank teller” really means “Linda is a bank teller and not active in the feminist movement” (otherwise the second answer would be a subset of the first, which would be highly unusual for a multiple choice survey).
The Soviet Union study has a better design, where people are randomized and only see one option and are asked how probable it is.
Yes. The bank teller example is probably flawed for that reason. When real people talk to each other, they obey Grice’s cooperative principle or flout it in obvious ways. A cooperative speaker would ask whether Linda was a bank teller who was active in the feminist movement, or a bank teller regardless of whether she was active in the feminist movement (showing that one answer included the other).
For one thing, dear readers, I offer the observation that most bank tellers, even the ones who participated in anti-nuclear demonstrations in college, are probably not active in the feminist movement.
Yet Eliezer is strangely tossing out a lot of information. We don’t just know that she is a bank teller and anti-nuclear, we also know: “She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice.”
At least where I went to college, the conditional probability that someone was a feminist given that they were against nuclear power, majored in philosophy, and were concerned with discrimination and social justice was probably pretty damn high.
You might, for instance, use this moment as an opportunity to signal that you aren’t a troll or indicate why you shouldn’t be banned, as I fear this fate awaits you if you continue on your present course.
That, of course, depends on your reason—but it could be that you will receive information on how better to achieve your goals, indication that they are impossible or prohibitively costly to accomplish (thus saving you time and effort), the good will of those interacting with you, confirmation that you are acting effectively....
Your goals may conflict with this, however. For instance, if they are to sow discord and vitriol, or to test the community, or end civilization, or to hide your goals, or bring low the proud, or find a worthy sparring partner, you may not benefit from such transparency.
Probability measure is spread pretty thin in these parts.
Awesome. This needs to be one of the first lines a protagonist hears after entering a cantina in a science fiction Space Opera...maybe rationalist fanfiction like HPATMOR.
Post removed from main and discussion on grounds that I’ve never seen anything voted down that far before. Page will still be accessible to those who know the address.
It seems vote scores of EY’s posts about site-related actions and discussion turn into agree/disagree votes, so based on that, the consensus is that this was a good action.
But I would like to have a comment here at least voicing the opinion (my opinion on this, coincidentally) that just displaying a low vote score is enough to warn people who don’t want to read things that get low vote scores.
I agree. “...on grounds that I’ve never seen anything voted down that far before...” isn’t literally true in the strictest possible sense. Implicitly, it means the editor also did not think of any sufficiently compelling counter-arguments upon reading the piece.
So unless removal for being downvoted by orders of magnitude more than anything else (including those previously removed, to avoid evaporative cooling) was the rule before the editor removed the post, the content of the post had a good deal to do with its removal.
I am not saying it is the wrong decision because of this, but I would prefer the formulation: “Post removed from main and discussion on grounds that I’ve never seen anything voted down that far before, and it seems to me the downvotes are probably justified.”
It’s important not to imply the post’s removal wasn’t a matter of editorial judgement, since it was, not that there’s anything wrong with that.
I have no issue with people making editorial judgements provided they are upfront about it. On the face of it, EY is saying he removed the post purely on the basis of popularity, as though he is deferring judgement to others. You assert he wasn’t and that no-one would be misled but if you just leave things implied there is a lot of scope for misunderstanding and it is a way of not taking responsibility.
So I’m curious. Pretending for a moment that you have actually approached this situation rationally, you must have had some significant amount of evidence that humans do not commit any kind of conjunction fallacy (except when “tricked”).
Given that you are so confident in this contention that you feel you can safely dismiss every one of the significant number of corroborating scientific studies a priori as “wrong and biased”, I’m wondering where you got this evidence that humans never implement faulty probability estimation techniques except when being tricked, and why nobody else seems to have seen it.
you must have had some significant amount of evidence that humans do not commit any kind of conjunction fallacy
Nope. I don’t have any evidence you don’t have. I don’t think the disagreement is primarily about evidence but philosophical understanding and explanation.
Given that you are so confident in this contention that you feel you can safely dismiss every one of the significant number of corroborating scientific studies
I’ve looked at some. Found they weren’t even close to meeting the proper standards of science. I’ve run into the same problem with other kinds of studies in various fields before. I have general purpose explanations about bad studies. They apply. And no one, on being challenged, has been able to point me to any study that is valid.
I’m wondering where you got this evidence that humans never implement faulty probability estimation techniques except when being tricked
I have explanations that probability estimating isn’t how people think in the first place, in general. See: Popperian epistemology.
nobody else seems to have seen it.
You mean, nobody else here seems to have seen it, on a site devoted to the idea of applying probability math to human thought.
I wonder what the downvote pattern here would be if people wouldn’t see the existing downvotes in the −20 range along with the post, but would actually need to read the text and individually decide whether to downvote, upvote or abstain with no knowledge of the other votes.
What if the displayed karma, at least to anyone else other than the author, would be shown as just 0 or −1 if it was any amount into the negative? Then there wouldn’t be an invitation to join the happy fun massive downvote pile-on party without even reading the post.
My model predicts the opposite of what you seem to be suggesting, actually. Downvotes are a limited resource, even if the limit isn’t a restrictive one, so I expect that at least some users will decline to downvote in order to conserve that resource if they see that the point has been made. Even without that particular pressure, it seems likely to me that some people just won’t bother to downvote if it’s already clear that something isn’t approved of. The latter is a bit like the bystander effect—if one knows that someone else has already taken care of the problem, one won’t feel much pressure to do so as well.
We’re probably both right, in a sense—when this kind of question has come up before, there have tended to be a mix of responses, so both things are probably happening. I do think that on net we’d see more downvotes if the existing balance wasn’t public until people voted, though, even if ‘abstain’ counted as a vote so that people who were just curious weren’t forced to weigh in.
Yeah, I don’t actually have a very solid anticipation on what would happen, though I would be more surprised if the result was even more downvotes. Still, you do make a case why it might well be.
I’m not sure I’d mind a lot of people reading something and individually deciding to downvote it, that would just do what the score is originally supposed to and sum a lot of individual assessments. What I don’t like is getting the pile-on effect from both showing the current score and the downvote options. It makes downvoting stuff already in the negative further feel icky. It also makes it feel a lot more symbolic to upvote something you see far in the negative or downvote something far in the positive.
And I didn’t even know downvotes cost something. I see a mention of a cap on an old thread, but I don’t see my downvote mana score anywhere in the UI.
What I don’t like is getting the pile-on effect from both showing the current score and the downvote options. It makes downvoting stuff already in the negative further feel icky. It also makes it feel a lot more symbolic to upvote something you see far in the negative or downvote something far in the positive.
I’m not sure I’m parsing this properly—it sounds to me like you’re saying that seeing a very low score along with the option to lower it further makes you less likely to actually downvote, because it feels like ganging up on someone, and more likely to upvote it, compared to not seeing the existing score?
Does it change anything if you specifically focus on the fact that upvotes and downvotes are supposed to mean “I want to see more/less like this” rather than “I think this is good/bad” or “I think this is right/wrong” or “I like/dislike the point being made”?
And I didn’t even know downvotes cost something. I see a mention of a cap on an old thread, but I don’t see my downvote mana score anywhere in the UI.
The downvote limit isn’t displayed anywhere, until you hit it. (And I’m not sure you get a proper notification then, either.) But the algorithm is that you can make four times as many downvotes as your karma score, and then you can’t make any more downvotes until your karma goes up. It’s not something that most people will ever run into, I suspect, but knowing that it’s possible seems likely to have an effect—and possibly a stronger one given the uncertainty of the size of one’s buffer, in fact.
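For concreteness, here is a minimal sketch of the cap as described above; the function and parameter names are hypothetical illustrations, not LessWrong’s actual code:

```python
# Minimal sketch of the described cap; names are hypothetical, not LW's real code.
def downvotes_remaining(karma: int, downvotes_cast: int) -> int:
    """Total downvotes allowed = 4 * current karma; the unused balance remains."""
    return max(0, 4 * karma - downvotes_cast)

def can_downvote(karma: int, downvotes_cast: int) -> bool:
    # Once the balance hits zero, no further downvotes until karma rises.
    return downvotes_remaining(karma, downvotes_cast) > 0
```

On this reading, a user with 50 karma who has cast 200 downvotes is exactly at the limit and can’t downvote again until their karma goes up.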
I’m not sure I’m parsing this properly—it sounds to me like you’re saying that seeing a very low score along with the option to lower it further makes you less likely to actually downvote, because it feels like ganging up on someone, and more likely to upvote it, compared to not seeing the existing score?
Pretty much. And yeah, it looks like I’m saying that I behave the opposite way from how I expect most of the group to behave. But then again, in both cases I’m reacting more to a big, shiny sign saying “Look! This is what Us People are expected to think about this thing!” than to the actual thing itself. I’m just generally reacting to anything with that sort of mindless, mob-like tone with “screw You People.”
Does it change anything if you specifically focus on the fact that upvotes and downvotes are supposed to mean “I want to see more/less like this” rather than “I think this is good/bad” or “I think this is right/wrong” or “I like/dislike the point being made”?
It’s easy enough to imagine that when I’m the one doing or receiving the voting. When I’m seeing this stuff unfold elsewhere, I need to imagine that the downvoters and the downvotees are also seeing things in that way. Given that this requires extra thought, and the simple, instant interpretation is upvote good, downvote bad, I’m not having a very easy time imagining that’s the case.
Also given how human group psychology works, I’m not sure there’s that much of a difference there. Saying that you’d like to see less of the sort of thing the author implicitly thought people are interested in seeing by posting it doesn’t sound like that much less of a rebuke than just saying the thing was bad or wrong. You could probably make a case for it being a bigger rebuke, since it implies the author is too incompetent to figure out what’s good content, and the social inferior whom the downvoters can pass judgment on, rather than someone with equal social standing having a dispute with the others on whether something is good and right or bad and wrong.
Does it change anything if you specifically focus on the fact that upvotes and downvotes are supposed to mean “I want to see more/less like this” rather than “I think this is good/bad” or “I think this is right/wrong” or “I like/dislike the point being made”?
This is true in at most one respect: many implementors of the system primarily had such intent.
Count me among those who saw downvoting this as pointless. It was already downvoted enough to be removed from the main page, and downvotes are limited. Also, all else equal, I feel bad when engaging in piling on. This post is what I think of as ideal: it responds to the OP by finding the limited case for which an argument resembling that of the OP would be right, so the OP can see the contrast without feeling attacked.
I read the post in the RSS feed, clicked through to downvote, and then didn’t because I thought “s/he’s suffered enough”. Arguably, with a monumentally wrong post like the OP, we might see more people downvoting if they didn’t know others had.
Before reading the post, I skimmed the comments. I see that the author’s replies to comments are clear evidence of failure. I don’t know, or care, if it’s trolling or merely defensiveness (I guess the latter is by far the more likely).
As for the claim that the particular questions are designed to elicit wrong answers (and yes, the wrong answers are objectively wrong), so as to sex up the results, that’s obviously true. That obviousness is probably the reason for the initial downvotes.
I think the author has completely misunderstood the intent of “conjunction fallacy”. It’s not that people are more prone to favor a conjunction as more plausible in general, just because it’s a conjunction; it’s that it’s obviously wrong to consider a conjunction more likely than any one of its components.
I don’t know about front page posts, but a lot of comments stop getting downvotes at −2 or −3. I think they turn invisible for some people by default, so that’s why, rather than people thinking the point is already made.
On Hacker News, early voting trends usually continued to inspire more votes of the same kind. Except sometimes, when someone would comment “Why is this getting downvoted?”, which could reverse it. They’ve capped things at a −4 point minimum now. Upvote orgies are still common. Some are due to quality, sure—some things deserve a lot of upvotes—but not all of them.
Hacker News seems to have a definite idea about curbing downvote sport by capping the point display when things go negative. This might be a good idea. Another intentional thing they have is no downvoting at all on toplevel posts, but their posts are more links and short questions than original articles, so this might not fit LW that well.
Spurious upvoting is probably also going on, but I don’t see it as being as problematic as downvoting, since it doesn’t come with the direct effect of anonymously telling people that they don’t belong on the forum.
Others have addressed most of the issues but I’d like to address another issue.
Claims that have explanatory reasons are better than claims that don’t.
This isn’t at all relevant. In the Soviet-Poland experiment the two groups were asked to give a probability estimate. Whether such claims are better in some sense is a claim distinct from whether or not the probability of one or the other is higher. Even if one prefers claims with explanatory power to those without it, this doesn’t make those claims more probable. Indeed, the fact that they are easier to falsify in the Popperian sense can be thought of as connected in some way to the fact that, in a Bayesian sense, they are more improbable.
But you can’t argue that, because the class of hypotheses is preferred, people should be willing to assign a higher probability to them.
“Probability estimate” is a technical term which most people don’t know. That isn’t a bias or a fault. When asked to give one, especially using a phrase like “which is more probable”, which is ambiguous (it’s often used in a non-mathematical sense, whereas the term “probability estimate” isn’t), they guess what you want and try to give that instead. Make sense so far?
But this is nullified in the colour die experiment of Tversky and Kahneman, as detailed in the Conjunction Fallacy article, as linked by HonoreDB in a comment in this very topic, as so far unaddressed by you.
Consider a regular six-sided die with four green faces and two red faces. The die will be rolled 20 times and the sequences of greens (G) and reds (R) will be recorded. You are asked to select one sequence, from a set of three, and you will win $25 if the sequence you chose appears on successive rolls of the die. Please check the sequence of greens and reds on which you prefer to bet.
RGRRR
GRGRRR
GRRRRR
65% still chose sequence 2 despite it being a conjunction of sequence 1 with another event. There was actual money on the line if the students chose a winning sequence. There is no plausible way that the students could have misinterpreted this question because of ambiguous understandings of phrases involving “probability”.
I find your claim that people interpret “which is more probable” as asking something other than which is more probable to be dubious at best, but it completely fails to explain the above study result.
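For anyone who would rather check the logic than take the 65% figure on trust, here is a quick Monte Carlo sketch of the bet, assuming “appears on successive rolls” means the sequence occurs as a contiguous run somewhere in the 20 recorded rolls:

```python
import random

def win_prob(seq: str, trials: int = 200_000) -> float:
    """Estimate the chance that seq occurs as a contiguous run in 20 rolls
    of a die with four green (G) faces and two red (R) faces."""
    wins = 0
    for _ in range(trials):
        rolls = "".join(random.choice("GGGGRR") for _ in range(20))
        if seq in rolls:
            wins += 1
    return wins / trials

for seq in ("RGRRR", "GRGRRR", "GRRRRR"):
    print(seq, round(win_prob(seq), 3))
```

Whatever the exact numbers come out to, any 20-roll record containing GRGRRR necessarily contains RGRRR, so sequence 1 must win at least as often as sequence 2; preferring sequence 2 is a dominated bet.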
I’m not calling it anything, and I didn’t use the word “bias” once. Perhaps people who have studied probability theory would have fared better on the task. But this is kind of the point, isn’t it? The issue is that when a person intuitively tries to make a judgement on probabilities, their intuition gives them the wrong result, because it seems to use a heuristic based on representativeness rather than actual probability.
Nobody is saying that this heuristic isn’t most likely more than adequate in the majority of normal day-to-day life. You seem to be conflating the claim that people use some kind of representativeness heuristic, rather than correct probability analysis, in probability estimation problems—which appears to be true, see above—with some kind of strawman claim you have invented that people are stupid idiots whose probability estimation faculties are so bad they reliably get every single probability estimation problem wrong every time, lolmorons. People here are posting in defense of the first one, you keep attacking as if they are defending the second one.
The hypothesis
a representativeness heuristic was easier for evolution to implement than a full, correct model of probability theory, and it did the job well enough to be a worthwhile substitute, thus humans will incorrectly choose representative cases rather than more probable cases when attempting to choose more probable cases
predicts the results of all the studies strongly. Your contrary hypothesis, which to me appears to be along the lines of
humans DO implement correct probability (i.e. they do not implement it incorrectly) but psychologists are determined to make humans look stupid in order to laugh at them, thus they craft experiments that exploit ambiguity over “probability” to make it look like humans use a representativeness heuristic when they in fact do not
predicts the results of some of the studies, but not the red/green die rolling one.
So now we are waiting for your post hoc justification of how test subjects who knew they were seeking a strictly more probable case (because they were hoping to win money) still got the wrong answer.
The issue is that when a person intuitively tries to make a judgement on probabilities
Not intuitively. They’re in a situation outside their daily routine that they don’t have good intuition about. They are making a judgment not because they think they know what they are doing but because they were asked to.
Can you see how artificial situations are not representative of people’s ability to cope with life? How putting people in artificial situations designed to fool them is itself a biased technique which lends itself to reaching a particular conclusion?
How putting people in artificial situations designed to fool them is itself a biased technique which lends itself to reaching a particular conclusion?
(emphasis mine)
I understand this criticism when applied to the Linda experiment, but not when it is applied to the color experiment. There was no “trickery” here.
To avoid more post-hockery, please propose a concrete experiment testing the conjunction fallacy, but controlling for “tricks” inherent in the design.
Not all the experiments are the same thing, just because they reached the same conclusion.
The color experiment is different: it’s about asking people to do something they don’t know how to do, and then somehow interpreting that as a bias.
The paper doesn’t describe the test conditions (that makes it unscholarly; scientific papers are supposed to have things like “possible sources of error” sections and describe their experimental procedure carefully. It doesn’t even say if it was double-blind. Presumably not. If not, an explanation of why it doesn’t need to be is required but not provided). There are various issues there, e.g. pressure not to ask questions could easily have been present.
Setting that aside, it’s putting people in an artificial situation that they don’t understand well and getting them to make a guess at the answer. This doesn’t simulate real life well. It has a consistent bias: people are better at real life than artificial situations designed in such a way that people will fail.
EDIT: Look at this sentence in the 1983 paper:
Our problems, of course, were constructed to elicit conjunction errors, and they do not provide an unbiased estimate of the prevalence of these errors.
See. Biased. Specifically designed to fool people. They directly admit it. You never read the paper, right? You just heard a summary of the conclusions and trusted that they had used the methods of science, which they had not.
Our problems, of course, were constructed to elicit conjunction errors, and they do not provide an unbiased estimate of the prevalence of these errors.
See. Biased. Specifically designed to fool people.
What that says is that the test in question addresses only one of two obvious questions about the conjunction fallacy. It addresses the question of whether that kind of mistake is one that people do sometimes make; it does not address the question of how often people make that kind of mistake in practice. It’s like if they were trying to find out if overdosing on a certain vitamin can be fatal: They’ll give their (animal) subjects some absurd amount of it, and see what happens, and if that turns out to be fatal, then they go trying different amounts to see where the effect starts. (That way they know to look at, say, kidney function, if that’s what killed the first batch of subjects, rather than having to test all the different body systems to find out what’s going on.) All the first test tells you is whether that kind of overdose is possible at all.
Just because it doesn’t answer every question you’d like it to doesn’t mean that it doesn’t answer any questions.
Do you understand that the situation of “someone using his intelligence to try to fool you” and the situation “living life” are different? Studies about the former do not give results about the latter. The only valid conclusion from this study is “people can sometimes be fooled, on purpose”. But that isn’t the conclusion it claims to support. Being tricked by people intentionally is different than being inherently biased.
The point is “people can be fooled into making this specific mistake”, which is an indication that that specific mistake is one that people do make in some circumstances, rather than that specific mistake not being made at all. (As a counterexample, I imagine that it would be rather hard to trick someone into claiming that if you put two paperclips in a bowl, and then added two more paperclips to that bowl, and then counted the paperclips, you’d find three—that’s a mistake that people don’t make, even if someone tries to trick them.)
“Some circumstances” might only be “when someone is trying to trick them”, but even if that’s true (which the experiment with the dice suggests it’s not) that’s not as far removed from “living life” as you’re trying to claim—people do try to trick each other in real life, and it’s not too unusual to encounter other situations that are just as tricky as dealing with a malicious human.
The color experiment is different: it’s about asking people to do something they don’t know how to do, and then somehow interpreting that as a bias.
Setting that aside, it’s putting people in an artificial situation that they don’t understand well and getting them to make a guess at the answer. This doesn’t simulate real life well.
This is exhausting. Whatever this “real life” of yours is, it’s so boring, predictable, and uniform as to be not worth living. Who hasn’t had to adapt to new (yes, even “artificial”) situations before? Who hasn’t played a game? Who hasn’t bet on the outcome of an event?
Propose an experiment already. My patience is waning.
Yes. If there’s no way to test your claim that systemic pseudoscientific practice brought about the conjunction fallacy, how can we know?
I have something like a thirty percent chance of correctly determining whether or not you will judge a given experimental design to be biased. Therefore I surely can’t do it.
We should just believe the best experiment anyone did, even if it’s no good?
I don’t know of any experiment that would be any good. I think the experiments of this kind should stop, pending a new idea about how to do fundamentally better ones.
They’re in a situation outside their daily routine that they don’t have good intuition about. They are making a judgment not because they think they know what they are doing but because they were asked to.
This is not exactly an uncommon situation in day-to-day life.
Your objection seems to boil down to, “Experimental settings are not identical to actual settings, therefore, everything derived from them is useless.”
It seems pretty unlikely that people will miscalculate probability in an experiment with money on the line, but then always calculate it with perfect accuracy when they are not specifically in an experimental setting.
In short, how is the experimental setting so different that we should completely ignore experimental results? If you have a detailed argument for that, then you’d actually be making a point.
I think, in general, they don’t calculate it at all.
They don’t do it in normal life, and they don’t do it in most of the studies either.
In short, how is the experimental setting so different that we should completely ignore experimental results?
It’s different in that it’s specifically designed to elicit this mistake. It’s designed to trick people. How do I know this? Well, apart from the various arguments I’ve given, they said so in their paper.
So, your position is that it is completely unrealistic to try to trick people, because that never happens in real life? Really? Or is it a moral condemnation of the researchers? They’re bad people, so we shouldn’t acknowledge their results?
People try to trick each other all the time. People try to trick themselves all the time. People routinely make informal estimates of what’s likely to happen. These experiments all show that under certain circumstances, people will systematically be wrong. Not one thing you’ve said goes to counter this. Your entire argument appears predicated on the confusion of “all” with “some.”
If you succeed at tricking people, you can get them to make mistakes.
What those mistakes are is an interesting issue. There is no argument that the mistakes actually have anything to do with the conjunction fallacy. They were simply designed to look like they do.
If you succeed at tricking people, you can get them to make mistakes.
This pretty much is the conjunction fallacy. Facts presented in certain ways will cause people to systematically make mistakes. If people did not have a bias in this respect, these tricks would not work. It is going to be hard to get people to think, “Billy won the science fair and is captain of the football team” is more likely than each statement separately, because the representativeness heuristic is not implicated.
There is no argument that the mistakes actually have anything to do with the conjunction fallacy.
I have no idea what this means, unless it’s saying, “I’m right and that’s all there is to say.” This is hardly a useful claim. There appears to be rather overwhelming argument on the part of most of the commenters on this forum and the vast majority of psychologists that there is not only such an argument, but that it is compelling.
No. Absolutely not. First of all, the Soviet experiment was done (and has been replicated in other versions) with educated experts. Second of all, it isn’t very technical at all to ask for a probability. People learn these in grade school. How do you think you can begin to talk to the Bayesians when you think “probability estimate” is a technical term? They use far more difficult math than that as part of their basic apparatus. Incidentally, this hypothesis is also ruled out by the follow-up studies, already linked in this thread, which show that the conjunction fallacy shows up when people are trying to bet.
I have to wonder if your fanatical approach to Popper is causing problems, in which you think, or act as if you think, that any criticism, no matter how weak or ill-informed, allows the rejection of a claim until someone responds to that specific criticism. This is not a healthy attitude if one is interested in an exchange of ideas.
I agree that these studies don’t necessarily show what they intend to. In fact, I could easily imagine giving a higher probability for X&Y than Y myself… if I were asked about Y first, and if X were evidence for Y that I had not considered. (Of course, after the fact I’d adjust my probability for Y.)
Then you’re talking about the conditional probability P(Y|X), rather than the joint probability P(X&Y).
It’s quite possible that if you thought the questions were asking about conditional probability, participants in these studies might have, too. Let’s take the Russia question:
“A complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983.”
“A Russian invasion of Poland, and a complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983.”
Taken more literally, this question is asking about the joint probability: P( invasion & suspension of diplomatic relations)
But in English, the question could be read as: “A Russian invasion of Poland and, given this invasion, a complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983.”
in which case participants making that interpretation might give P( suspension of diplomatic relations | invasion )
It’s not at all clear that the participants interpreted these questions the way the experimenters thought.
What I mean is, I might not have any particular notion of what could cause a complete suspension of diplomatic relations, and give it, say, .01 probability. Then, when asked the second question, I might think “Oh! I hadn’t thought of that—it’s actually quite likely (.5) that there’ll be an invasion, and that would be likely (.5) to cause a suspension of diplomatic relations, so A&B has a probability of .25 (actually, slightly more). Of course, this means that B has a probability higher than .25, so if I can go back and change my answer, I will.”
(These numbers are not representative of anything in particular!)
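To make the parent comment’s arithmetic explicit (a sketch using its illustrative numbers only):

```python
# Illustrative numbers from the comment above -- not estimates of anything real.
p_invasion = 0.5                    # P(A): invasion judged quite likely
p_suspension_given_invasion = 0.5   # P(B|A): suspension, given an invasion

# Chain rule for the joint probability: P(A & B) = P(A) * P(B|A)
p_joint = p_invasion * p_suspension_given_invasion
assert p_joint == 0.25

# Since (A & B) implies B, P(B) >= P(A & B), so the initial .01 estimate
# for B alone should be revised upward to at least 0.25.
```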
I do agree, however, that the literature overall strongly suggests the veracity of this bias.
In the case of the Russia/Poland question the subjects were professional political analysts. For this case in particular, we can assume that for these subjects the amount of information about political relations included in the question itself is insignificant.
Curi, you are just confused. But rather than asking questions or attempting to resolve your own confusion, you’ve gone and loudly, falsely accused some good research of being shoddy. You have got to learn to notice when you’re confused. You are trying to deal with confusing topics, and you have approached them with insufficient humility. You have treated criticisms as foes to be fought, and your own ignorance as a castle to be defended. Until you can stop yourself from doing that, you will make no progress, and we cannot help you.
But the research isn’t claiming that the conjunction fallacy exists for all X and Y. The claim, as I understand it, is that people rely on the representativeness heuristic, which is indeed useful in many contexts but which does not in general obey the probability-theoretic law that P(X&Y)<P(X).
Further evidence for the conjunction fallacy was covered on this site in the post “Conjunction Controversy”.
The strangest thing about this post is how everything is anthropomorphised. It’s as if what is at stake isn’t determining a relatively isolated matter of fact, but rather ostracizing a social rival.
Explanations are models, not social beings you can hope to subdue or isolate by making them unpopular.
Something that causes angst needn’t be intending to do so. The frustration this idea causes you is in you, not an evil part of this idea. The facts in question are just the way the world is, not a kid in school who is to be slandered for being your rival, whom you can disempower by convincing others s/he is a very biased bad person.
A conjecture: in general, the chances of being right about something are inversely proportional to how closely your model for that thing matches your self-model, which is what you use by default when you don’t understand the specific causal mechanisms at play. This post is an extreme example of that phenomenon.
If you’re getting angry at patterns of human cognition for “claiming to exist” because they “really” “do not exist”, you’re simply doing it wrong.
I’m not angry. Why do you assume people you disagree with are angry?
What kind of words do you think should be used to describe the contents of theories? e.g. take
You think that’s anthropomorphic and should be replaced by what sentence?
This could be unpacked in several ways: “Why do you always assume people you disagree with are angry?”, “Why do you sometimes but not always assume people you disagree with are angry?”, and an equivocation which pretends to the advantages of each statement.
“Why do you always assume people you disagree with are angry?”, would be silly for you to say after only having read my few LW posts. If true, it would be a severe reasoning flaw on my part.
“Why do you sometimes but not always assume people you disagree with are angry?”, in turn has several possible meanings. “Assumption” could mean an arbitrary prior I have before considering the subject I am analyzing, or alternatively it could mean an intermediate conclusion that is both reasoned and a basis for other conclusions. It may also mean something I’m not thinking of, and like other words, it of course may be an equivocation in which the speaker seeks to mean different parts of coherent but incompatible definitions.
If “assumption” means “prior”, your statement highlights that my assumptions may be silly for a reason additional to their being false: read this way, it would suggest that my reasoning is not just misaligned with the reality about which I am trying to reason, but that my reasoning isn’t even consistent, depending as it does on which of several arbitrary reading assumptions I select without considering the external world. As an analogy, not only do I attempt to map the cities around mine without looking at them, my maps contradict each other by allocating the same space to different cities in an impossible way. “Prior” is often the intended way to interpret the word “assumption”, so a definition along these lines springs naturally to mind. However, you would have even less justification for this meaning than for the previous one.
If “assumption” means “intermediate conclusion that is both reasoned and a basis for other conclusions”, which it sometimes does, then your statement wouldn’t be at all unflattering to me or relevant.
Something that would at first appear to be an equivocation might itself not be logically flawed, but its meaning might be difficult to track. For instance, one could say “Why do you think some people you disagree with are angry?” “Think” and “some” don’t presuppose anything about the reason behind the thought. Knowing that I think you are angry, you would be correct to use a term that encompasses me either assuming or concluding that you are angry, as you don’t know which it is.
“Why do you assume people you disagree with are angry?” depends on being interpreted as “Why do you sometimes think, whether as a prior or an intermediate conclusion, that some people you disagree with are angry?” to seem a grammatical question that does not make unreasonable assumptions. However, in the context of responding and trying to rebut a criticism of your thoughts, your statement only performs the work it is being called on to do if its answer casts doubt on my thinking.
“Why do you assume all people you disagree with are angry?”, “Why do you arbitrarily assume some people you disagree with are angry?”, and “How could you reasonably conclude all people you disagree with are angry?” are questions to which no one could give a reasonable explanation, taking their assumptions for granted. These are powerful rebuttals, if they can be validly stated without making any assumptions.
Though there is no explanation to them, there is an explanation for them, a response to them, and an answer to them, namely: you are making incorrect assumptions.
Here, your statement attempts to borrow plausibility from the true but irrelevant formulation without assumptions from which there is no conclusion that my thinking is flawed, while simultaneously relying on the context to cast it as a damning challenge if true—which a different sentence in fact would be. Your equivocation masks your assumptions from you.
As to why I think that...well, I started with the second paragraph, so we can start there.
(Pejorative) research you disagree with is designed to prove people are (pejorative). Supposed intended implications that you despise, which are so important they aren’t at all addressed...which of course would mean that you have no evidence for them...and are confabulating malice both where and as an explanation behind what you do not fully understand or agree with.
To clarify the part you were curious about:
You apparently sometimes assume people are angry on minimal evidence and without explaining your reasoning. You did it with me, and I don’t think this is a personal grudge but something you might do with someone else in similar circumstances. In particular, I think you may have tried to judge my emotional state from the on-topic content and perhaps also the style of my writing. I understand that in some cases, with some people, such judgments are accurate. I don’t think this is such a case, and I don’t think you had any good evidence or good explanation to tell you otherwise.
I think that you had some kind of reason for thinking I was angry, but that you left it unstated, which shields it from critical analysis.
The way I read your statements, you assumed I was angry (based presumably on unstated reasoning) and drew some conclusions from that, rather than asserting I was angry and arguing for it. E.g. you suggested that I got angry and it was causing me to do it wrong. (BTW that looks to me like, by your standards, stronger evidence that you were angry at me than the evidence of my anger in the first place. It is pejorative. I suspect you actually meant well—mostly—but so did I.)
People can dislike things without being angry. Right? If you disagree with me about the substance of my criticisms, ad hominems against my emotions are not the proper response. Right?
Check out the second example here, where the subjects bet with real money in a way demonstrating the bias.
I think that the between-subjects versions of the conjunction fallacy studies actually provide stronger evidence that people are making the mistake. For instance*, one group of subjects is given the description of Linda and asked to rank the following statements in order, from the one that is most likely to be true of Linda to the one that is least likely to be true of her:
a) Linda is a teacher in elementary school
b) Linda works in a bookstore and takes Yoga classes
c) Linda is a psychiatric social worker
d) Linda is a member of the League of Women voters
e) Linda is a bank teller and is active in the feminist movement
f) Linda is an insurance salesperson
A second group of subjects does the same task, but statement (e) is replaced by “Linda is a bank teller.”
In this version of the experiment, there should be no ambiguity about what is being asked for, or about the meaning of the words “likely” or “and”—the subjects can understand them, just as statisticians and logicians do. But ‘feminist bank teller’ still gets judged as more likely than ‘bank teller’ relative to the alternatives; that is, subjects in the first group give statement (e) an earlier (more likely) ranking than subjects in the second group.
Some people don’t like the between-subjects design because it’s less dramatic, and you can’t point to any one subject and say “that guy committed the conjunction fallacy”, but I think it actually provides more clear-cut evidence. Subjects who saw a conjunctive statement about Linda being a bank teller and a feminist considered it more likely than they would have if the statement had not been a conjunction (and had merely included “bank teller”). This isn’t a trick, it’s an error that people will consistently make—overestimating the likelihood that a very specific or rare statement applies to a situation, even considering it more likely than they would have considered a broader statement which encompasses it, because it sounds like it fits the situation.
* This study design has been used, with these results, in (I think) Kahneman and Tversky’s original paper on representativeness. But I didn’t take the time to find the paper and reproduce the exact materials used; I just googled and took the 6 statements from here.
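To show the shape of the between-subjects comparison, here is a sketch with invented placeholder ranks, purely for illustration; these are not data from any study:

```python
# Placeholder ranks for the target statement, 1 = judged most likely of the six.
# Invented numbers to illustrate the comparison -- not real experimental data.
group_conjunction = [2, 3, 2, 4, 3]  # subjects who saw "bank teller and ... feminist"
group_plain = [5, 4, 5, 6, 4]        # subjects who saw just "bank teller"

def mean(ranks):
    return sum(ranks) / len(ranks)

# The fallacy's between-subjects signature: an earlier (lower) average rank
# for the conjunctive statement, even though no single subject ranked both.
print(mean(group_conjunction), mean(group_plain))
```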
Looking at some of this, I wonder if people are biased towards thinking that the more concrete statement is more likely? Somehow, in my mind, “feminist” is more abstract than “feminist bank teller”. The latter seems closer to being a person, whereas the former seems closer to a concept. The more descriptive you are about a subject, the more concrete it sounds, and thus the more likely it seems, because it sounds closer to reality. The less descriptive, the more abstract it sounds, and therefore the less likely it seems, because it sounds more hypothetical or “theoretical”.
Of course, the more descriptive account is going to have more conjunctions, and therefore a lower or equal probability. I just wonder if this has been taken into account.
Note for example the contextual hints in
which bias it.
This post is pretty incoherent, but I was able to get from it that you had done basically no research on the conjunction fallacy. So here, have a random paper.
Fallacies are not hard and fast rules about the way people think. The conjunction fallacy is an example of something people SOMETIMES do wrong when thinking, and saying that it doesn’t exist because it only happens in specific situations is like saying ad hominem attacks don’t exist because you’ve never used one.
I think it’s worthwhile that you brought this up (and I’d love to see a discussion on the same issue of questionable phrasing in the Wason selection test, where I suspect your criticism might be valid), but I don’t think you’re right.
I mean, yes, most people would get the [cancer cure/cancer cure+flood] one right. It’s easy. But the example of Linda the [bank teller/bank teller+active feminist] does have exactly the same logical structure. And people get it wrong because of the way they’re used to using language without thinking very hard in this context, yes.
Kinda like the way that people can fall for “if all X’s are A, B, and C, then anything that’s A, B, and C must be an X,” but not “if all dalmatians have spots, then anything with spots is a dalmatian”. Cuz we’ve got this screwy verb “to be” that sometimes means ‘is identical with’, sometimes ‘is completely within the set of’, sometimes ‘is one example of something with this property’ etc etc, and we have to rely on concrete examples oftentimes in order to figure out which applies in a particular context.
And maybe this is largely genetic, or maybe humans would reliably come to see the correct answers as obvious if they were raised in an environment where people talked about these issues a lot in a language that was clear about it. I dunno.
But I do know that I have run into problems following the same pattern as the Linda the [bank teller/bank teller+active feminist] “in the wild”, and made mistakes because of them. I have created explanations and plans with lots of burdensome details and tricked myself into thinking that made them more likely to be right, more likely to succeed. And I’ve been wrong and failed because of that.
And I don’t think I’m unusual in this.
And I do know that I’ve avoided making this mistake at least several times after I learned about these studies and understood the explanation for why it’s wrong to rate “Linda is a bank teller and an active feminist” as more likely.
And I don’t think I’d be unusual in that.
So I think the conjunction fallacy does exist.
But it doesn’t have the same structure as presented to the person being asked. They specifically put in a bunch of context designed so that if you try to use the contextual information provided, you get the “wrong” answer (literally wrong, ignoring the fact that the person isn’t answering the question as literally written down).
It’s like if you added the right context to the flood one, you could get people to say both. Like if you talked for a while about how a flood is practically guaranteed, and then asked, some people are going to look at the answers and go “he was talking about floods being likely. this one has a flood. the other one is irrelevant”. Why? Well, they don’t know or care what you mean by the technical question, “Which is more probable?”. Answering a different question than the one the researcher claims to have communicated to his subjects is different than people actually, say, doing probability math wrong.
Yes, they put in a bunch of irrelevant context, and then asked the question. And rather than answering the question that was actually asked, people tend to get distracted by all those details and make a bunch of assumptions and answer a different question.
Say the researchers put it this way instead: “When we surround a certain simple easy question with irrelevant distracting context, we end up miscommunicating very badly with the subject.
Furthermore, even in ‘natural’ conditions where no-one is deliberately contriving to create such bad communication, people still often run into similar situations with irrelevant distracting context ‘in the wild’, and react in the same sorts of ways, creating the same kind of effects.
Thus people often make the mistake of thinking that adding more details to explanations/plans makes them more likely to be good, and thus do people tend to be insufficiently careful about pinning down every step very firmly from multiple directions.”
That’s bending over backwards to put the ‘fault’ with the researchers (and the entire rest of the world), and yet it still leads to the same significant, important real world day-to-day phenomena.
That’s why I think you’re “wrong” and the “conjunction fallacy” does actually “exist”.
Or, to put it another way, “conjunction fallacy exists—y/n” correlates with some actual feature that the territory can meaningfully have or not have, and I’m pretty damn sure that the territory does have it.
However! I would like to talk about how the same criticism might apply to the Wason selection task, and the hypothesis that humans are naturally bad at reasoning about it generally, but have a specialized brain circuit for dealing with it in social contexts.
Check it. Looking at the wikipedia article, and zeroing in on this part:
I know I had to translate that to “even number cards always have red on the other side” before I could solve it, thinking, “well, they didn’t specifically say anything about odd number cards, so I guess that means they can be any color on the other side”.
Because that’s the way people would normally phrase such a concept (usually seeking a quick verification of the assumption about it being intended to mean odds can be whatever, if there’s anyone to ask the question).
And we do that cuz English aint got an easy distinction between ‘if’ and ‘if and only if’, and instead we have these vague and highly variable rules for what they probably mean in different contexts (ever noticed that “let’s” always seems to be an ‘inclusive we’ and “let us” always seems to be an ‘exclusive we’? Maybe you haven’t cuz that’s just an oddity of my dialect! I dunno! :P).
And maybe in that sort of context, for people who are used to using ‘if’ normally and not in a special math jargon way, it seems to be more likely to mean ‘if and only if’.
And if the people in the experiment are encouraged to ask questions to make sure they understand the question properly, or presented with the phrasing “even number cards always have red on the other side” instead, maybe in that case they’d be as good at reasoning about evens and odds and reds and browns on cards as they are at reasoning about minors and majors and cokes and beers in a bar.
But that seems like an obvious thing to check, trying to find a way of phrasing it so that people get it right even though it’s not involving anything “social”, and I’d be surprised if somebody hasn’t already. Guess I oughta post my own discussion topic on that to find out! But later. Right now, I’ma go snooze.
Great post.
The thing that bemuses me most at the moment is how bad I am at predicting the voting response to a post of mine on LW. Granted that I and most people upvote and downvote to get post ratings to where we think they should be, rather than rating them by what we thought of them and letting the aggregate chips fall where they may, I still can’t do well predicting it (that factor should actually be improving my accuracy).
This post of yours is truly fantastic and a moral success; you avoided the opportunities to criticize flaws in the OP and found a truly interesting instance of when an argument resembling that in the OP would actually be valid.
(This factor applies most significantly to mid-quality contributions. There is a threshold beyond which votes tend to just spiral off towards infinity.)
I think this is the only example I have seen of a negative infinity spiral on LW. The OP shouldn’t be too discouraged, if what you are saying is right (and I think it is). Content rated approaching negative infinity isn’t (adjudged) that much worse than content with a high negative rating.
In a way, this post is correct: the conjunction fallacy is not a cognitive bias. That is, thinking “X&Y is more likely than X” is not the actual mistake that people are making. The mistake people make, in the instances pointed out here, is that they compare probability using the representativeness heuristic—asking how much a hypothesis resembles existing data—rather than doing a Bayesian calculation.
However, we can’t directly test whether people are subject to this bias, because we don’t know how people arrive at their conclusions. The studies you point out try to assess the effect of the bias indirectly: they construct specific instances in which the representativeness heuristic would lead test subjects to the wrong answer, and observe that the subjects do actually get that wrong answer. This isn’t an ideal method, because there could be other explanations, but by conducting different such studies we can show that those other explanations are less likely.
But how do we construct situations with an obviously wrong answer? One way is to find a situation in which the representativeness heuristic leads to an answer with a clear logical flaw—a mathematical flaw. One such flaw is to assign a probability to the event X & Y greater than the probability of X. This logical error is known as the conjunction fallacy.
(Another solution to this problem, incidentally, is to frame experiments in terms of bets. Then, the wrong answer is clearly wrong because it results in the test subject losing money. Such experiments have also been done for the representativeness heuristic.)
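The logical law being leaned on here is easy to sanity-check mechanically; a small sketch over random events on a finite sample space:

```python
import random

random.seed(0)
outcomes = range(100)  # a finite sample space with uniform measure

for _ in range(1000):
    # Two arbitrary random events X and Y (subsets of the sample space).
    X = {o for o in outcomes if random.random() < 0.5}
    Y = {o for o in outcomes if random.random() < 0.5}
    # X & Y is a subset of X, so under the uniform measure P(X & Y) <= P(X).
    assert len(X & Y) <= len(X)

print("P(X & Y) <= P(X) held in every trial, as the conjunction rule requires")
```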
I like this post because it exposes some of my own faulty thinking.
I find myself less likely to look into what David Deutsch says about anything because of the manner in which you present your ideas. This isn’t fair; maybe you’re misrepresenting what he actually says. Maybe he explains it in a less grating fashion. Whatever his actual thoughts, it’s not fair to him that I’m discounting what he has to say...and yet, I can’t help it!
Very similar to point #1, I had a very difficult time reading all of your post because of things like:
… To me, the language of this sentence seems combative and off-putting, and in fact so does much of the post. I wasn’t able to put much thought into what you had to say because of this. The attitude conveyed seems to be “Haha, you guys are so dumb for believing the conjunction fallacy exists!”.
I should be able to read past the attitude it felt like I was getting from your writing and understand the points you were trying to make...but I confess it was very difficult to do so!
And finally (this is supposed to be item #3, but the software makes it a #1...)
Does it? I was never under the impression that this was what the conjunction fallacy showed. I may be wrong, but it doesn’t matter to my broken mind. This was another strike against your post, and I wish it hadn’t made it more difficult to read the rest of it. You could have expanded on why you felt the conjunction fallacy made such a sweeping generalization, but you didn’t and it made it harder for me to read.
Deutsch’s The Fabric Of Reality, the only book of his I’ve read, is very well-written and has much that LessWrongers would agree with or find interesting, though I disagree with some of it. The version of Deutsch’s ideas that curi is presenting seems to me utterly wrong-headed. This suggests to me that either Deutsch has suffered some catastrophic failure of reasoning (possibly due to severe brain injury) in the intervening decade-and-a-half, or that curi is misrepresenting his ideas.
Any specific details? Why do you think I’m misrepresenting his ideas? Do you know anything about my relationship with him? Or about how much I’ve studied his ideas, compared to how much you have? Can you point out anything I said which contradicts anything from FoR?
It seems to me you are claiming the authority of having read his book, to know what he thinks. In this regard you are outclassed. And as to rational argument … you don’t give any.
One reason I don’t edit out “combative” language is I don’t interpret it as combative or emotional. It is the literal truth—at least my best guess at such. I don’t read things in the emotional way that some people do, and I’m not other-people-oriented, and I don’t want to be, so it’s hard to remember the parochial and arbitrary rules others want me to follow to sound nice to them. And I don’t mind if the people too biased to read past style issues ignore me—I consider it a benefit that they will self-select themselves for me like that.
I don’t want to or like to spend my mental effort remembering other people’s biases in order to circumvent them. And I’m content with the results. The results I’ve gotten here are not representative (that is, I don’t offend everyone. I have control over that). And the assumptions you’re probably making about my goals here (required to judge whether I got what I wanted, or not) are not correct either.
It’s supposed to be a universal issue (which happens some significant portion of the time, not every time). That’s the only way it can demonstrate that it has anything to do with X&Y vs Y. If it’s merely tricks of the type I describe, the X&Y vs Y part is irrelevant and arbitrary; random things of the form X&Y vs Y in general won’t work for it, and you could easily do it with another logical form.
If you read his books you will find the style less grating.
Do you feel as if anyone who reads Less Wrong is getting any benefit out of your post?
I don’t think I’m making any assumptions about this. What are your goals? I can think of several off the top of my head:
Convince others of your ideas.
Get feedback on your ideas to help you further refine them.
Perform a social experiment on LW (use combative tone, see results)
Goal #3 and related “trollish” goals are the only ones I can think of off the top of my head that benefit from a combative tone.
I mean...why write at all if you don’t want most people to look at the actual content of your writing?
These aren’t attacks in question form. These are honest questions.
I personally enjoyed confirming that this thread sent curi below the threshold at which anyone who is not a prominent SIAI donor can make posts. Does that count?
See this comment:
Haven’t you noticed some people said positive things in comments?
You say you aren’t making “any assumptions” but then you give some. As an example, I’m not here to do (1) -- if it happens, cool, but it’s not my goal. That was, apparently, your assumption.
The halfway sane people can read the content. I haven’t actually said anything I shouldn’t have. I haven’t actually said anything combative, except in a few particular instances where it was intentional (none in this discussion topic). I’ve only said things that I’m vaguely aware offend some silly cultural biases. The reason I said “biased idiots” is not to fight with anyone but because it clearly and accurately expresses the idea I had in mind (and also because the concept of bias is a topic of interest here). There is a serious disrespect for people as rational beings prevalent in the culture here.
Do people really get offended because of language? Maybe sometimes. I think most of it is not because of that. It’s the substance. I think the attitude behind the conjunction fallacy, and the studies, and various other things popular here, is anti-human. It’s treating humans like dirt, like idiots, like animals. That’s false and grossly immoral. I wonder how many people are now more offended due to the clarification, and how many less.
That’s what I thought. And if I didn’t, saying it would change nothing because asserting it is not an argument.
Unless, perhaps, it hadn’t even occurred to me at all. I think there’s a clash of world views and I’m not interested in changing mine for the purpose of alleviating disagreement. E.g. my writing in general is focussed on better people than the ones to whom it wouldn’t even occur that questions might not be attacks. They are the target audience I care about, usually. You’re so used to dealing with bad people that they occupy attention in your mind. I usually don’t want them to get attention in mine. There’s better people in the world, and better things to think about.
People aren’t completely rational beings. Pretending they are is showing more disrespect than acknowledging that we have flaws.
Not like dirt. Not like idiots. As though we sometimes act as idiots, yes. Because we sometimes do. You seem to have trouble distinguishing “always” from “sometimes”.
We are, in fact, animals. We’re a type of ape that has a very big brain, as primates go. We have many differences from the rest of the animals, but the similarities with other animals should be clear enough that it would be a severe mistake to not call us animals.
Ah, now this is a very honest and revealing statement. These are two separate issues, of course. Statements can be true or false, and actions can be grossly immoral, and not the reverse. Yet they’re linked in your mind. Why is that? Did you decide these statements(*) were false and thus hold them grossly immoral (or, to be charitable, likely to lead to grossly immoral actions), or were you offended at the statements and thus decided they must be false?
Not offended. Just saddened.
(*) Rather, your misunderstanding of them. “People can think P(X&Y) > P(X)” is not the same as “People always think P(X&Y) > P(X) for all X and Y”. Yes, of course special X and Y have to be selected to demonstrate this. There is a wide range between “humans aren’t perfect” and “humans are idiots”. The point of studies like this is not to assert humans are idiots, or bad at reasoning, or worthless. The point is to find out where and when normal human reasoning breaks down. Not doing these studies doesn’t change the fact that humans aren’t perfectly rational, it merely hides the exact contours of when our reasoning breaks down.
They are not separate issues. The reason for the false statements is the moral agenda.
The possible (asserted, not shown) existence of an agenda might indeed have something to do with what studies were done and how the interpreters presented them. The morality or immorality of this agenda is what is a separate issue.
This seems terribly backwards to me. In order to become more rational, to think better, we must examine our built-in biases. We don’t do and use such studies simply to point out the results and laugh at “idiots”; we must first understand the problems we face, the places where our brains don’t work well, so that we can fix them and improve ourselves. Utilizing the lens that sees its flaws is inherently pro-human. (Note that “pro-human” should be taken to mean something reasonable like “wanting humans to be the best that they can be,” which does involve admitting that humans aren’t at that point right now. That is only “anti-human” in the sense that Americans who want to improve their country are “anti-America.”)
I wasn’t talking about examining biases that exist. I was talking about making up biases, and finding ways to claim the authority of science for them.
These studies, and others, are wrong (as I explained in my post). And not randomly. They don’t reach random conclusions or make random errors. All the conclusions fit a particular agenda which has a low opinion of humans. (Low by what standard? The average man-on-the-street American Christian, for example. These studies have a significantly lower opinion of humans than our culture in general does.)
Can you understand that I didn’t say anything about laughing? And that replies like this are not productive, rational discussion?
People who demean all humans are demeaning themselves too. Who are they to laugh at?
Do you understand that I didn’t say that the “lens that sees its flaws” was the anti-human thing?
Can you understand that the notion that our brains don’t work well, in some places, is itself a substantive assertion (for which you provide no argument, and for which these bad studies attempt to provide authority)?
Can you see that this claim is in the direction of thinking humans are a less awesome thing than they would otherwise be?
I’ll stop here for now. If you can handle these basic introductory issues, I’ll move on to some further explanation of what I was talking about. If you, for example, try to interpret these statements as my substantive arguments on the topic, and complain that they aren’t very good arguments for my initial claim, then I will stop speaking with you.
Please, please, please stop speaking with us. Uncle! Uncle!
Your post only served to highlight your own misunderstanding of the topic. From this, you’ve proceeded to imply an “agenda” behind various research and conclusions that you deem faulty. To be blunt: you sound like a conspiracy theorist at this point.
http://wiki.lesswrong.com/wiki/Bias.
Also, at this point it sounds like you’re throwing out all studies of cognitive bias based on your problems with one specific area.
Having a low opinion of default human mental capabilities does not necessarily extend to having a low opinion of humans in general. Humans are not static. Having an undeservedly high opinion of default human mental capabilities is a barrier to self-improvement.
Can you understand this Paul Graham essay
And see where on it your above comment falls?
I’m familiar with it. Why don’t you say which issue you had in mind.
Do you understand that my goal wasn’t to be as convincing as possible, and to make it as easy as possible for the person I was talking to?
You constantly attack trivial side-points and peculiarities of wording, putting you somewhere in the DH3-5 region, not bad but below what we are aiming for on this site, which is DH6 or preferably Black Belt Bayesian’s new DH7 (deliberately improving your opponents’ arguments for them, refuting not only the argument but the strongest thing which can be constructed from its corpse).
This is a good example. Yes, he used the word laugh and you didn’t, but that is not important to the point he was making, nor does it significantly change the meaning or the strength of the argument to remove it. If you wish to reach the higher levels of the DH you should try to ignore things like that and refute the main point.
This is another example. You never used the phrase ‘lens that sees its flaws’ but since the heuristics and biases program is absolutely central to the lens that sees its flaws and you accused that of being anti-human his central point still stands.
Side-points are not trivial, and they are not unimportant, nor irrelevant (some can sometimes be irrelevant; these aren’t, in my view).
Every mistake matters. Learning is hard. Understanding people very different from you is hard. Agreeing about which points are trivial, with people who see the world differently, is hard too.
To deal with the huge difficulty of things like learning, changing minds, making progress, it’s important to do things like respect every mistake and try to fix them all, not dismiss some as too small and gloss them over. Starting small is the best approach; only gradual progress works; trying to address the big issues straight away does not work.
New? Do you really think that’s a new idea? Popper talked about it long ago.
But I regarded it as important in several ways.
But 1) I think it does 2) I think that by rewriting it without the mistake he might learn something; he might find out it does matter; we might have a little more common ground. I don’t think skipping steps like this is wise.
You do not take into account time constraints. I have not been keeping count, but discussions with you alone have probably eaten up about 2 hours of my time, quite a lot considering the number of other tasks I am under pressure to perform.
Your approach may work, but in terms of progress made per second, it is inefficient. Consider how long it takes for every mistake:
1) First you have to notice the mistake
2) Then you have to take the time to write about it
3) Then the other person has to take the time to read it
4) Then they have to take the time to explain it to you / own up to it
5) Then they have to pay extra attention while they type, for fear they will make a similar mistake and you will waste their time again
6) Then, everybody else reading the comments will have to read through that discussion, and we really don’t care about such a tiny mistake, really, not at all!
Is it worth it for the tiny gains?
I think a better approach is to go straight for the central claim. If you and they continue to disagree, consider the possibility that you are biased, and question why their point might seem wrong to you, what could be causing you not to see their point, whether you have a line of retreat. (You trust that he will be doing the same; don’t make accusations until you are sure of yourself.) If that doesn’t work, you start tabooing words; an amazingly high proportion of the time the disagreement just vanishes. If that doesn’t work, you continue to look for reasons why you might be wrong, and continue explaining your position more clearly and trying to understand theirs.
You do not nitpick.
It wastes everybody’s time.
If Popper talked about it then why don’t you do it!
The more you’re having a hard time communicating and cooperating with people, the less you should skip steps. So, I will consider the best version in my mind to see if it’s a challenge to my views. But as far as trying to make progress in the discussions, I’ll try to do something simpler and easier (or, sometimes, challenging to filter people).
Rushing doesn’t solve problems better. If people haven’t got time to learn, so be it. But there are no magic shortcuts.
I disagree. So many conversations have basically 0 progress. Some is better than none. You have to do what actually works.
Yes. That’s exactly where human progress comes from.
But there’s so much disagreement in background assumptions to sort out, that this often fails. If you want to do this, publish it. Get 100,000 readers. Then some will like it. In a one on one conversation with someone rather different than you, the odds are not good that you will be able to agree about the central claim while still disagreeing about a ton of other relevant stuff.
Sometimes you can do stuff like find some shared premises and then make arguments that rely only on those, which reach conclusions about the central claim. But that’s kind of super hard when you think so differently that there’s miscommunication every other sentence.
People so badly underestimate the difficulty of communicating in general, and how much it usually relies on shared background/cultural assumptions, and how much misunderstanding there really is when talking to people rather different than yourself, and how much effort it takes to get past those misunderstandings.
BTW I would like to point out that this is one of the ways that Popperians are much more into rigor and detail than Bayesians, apparently. I got a bunch of complaints about how Popperians aren’t rigorous enough. But we think these kinds of details really matter a lot! We aren’t unrigorous, we just have different conceptions of what kind of rigor is important.
Yes I do that. And like 3 levels more advanced, too. Shrug.
You’re used to talking with people who share a lot of core ideas with you. But I’m not such a person. Tabooing words will not make our disagreements vanish because I have a fundamentally different worldview than you, in various respects. It’s not a merely verbal disagreement.
Fear is not the right attitude. And nor is “extra attention”. One has to integrate his knowledge into his mind so that he can use it without it taking up extra attention. Usually new skills take extra attention at first but as you get better with them the attention burden goes way down.
Are you interested in changing your worldview if it happens to be incorrect?
That’s a good way to get caught in an affective death spiral.
Yes of course.
I didn’t say I was making any of those assumptions. In fact, I specifically said I wasn’t assuming any of them.
You didn’t say that. Apparently you meant you were stating possible, not actual goals. But you didn’t say that either. Your bad. Agreed?
No, but I don’t see any point in continuing this.
This basically reads as, “The point of writing is for other people to divine what’s inside my head. There’s no point in me actually trying to communicate this clearly. I’ll write whatever I feel like and the burden is on them to figure it all out.”
Using lots of conclusory and inflammatory language makes your writing vastly less compelling. It’s not a far cry from saying, “Everyone who disagrees with me is a doo-doo head!” Writing to convince people has rules and forms you need to follow, because if you don’t, you will fail at convincing people. If your goal is merely the satisfaction of your own ego, I suppose this isn’t a problem.
In other words, if you have no desire to actually communicate successfully or persuade people, by all means, keep doing what you’re doing.
This post that you have excreted has essentially zero content. You restate the core idea behind the representativeness heuristic repeatedly, and baldly assert that there are good reasons for people having the intuitions that they do, that people are “using valuable real life skills” when they give incorrect answers to questions. No one’s arguing that it hasn’t been an evolutionarily useful heuristic, just that it happens to be incorrect from time to time. I cannot figure out where in your post you actually made an argument that the conjunction fallacy “doesn’t exist”, and I am overjoyed that you no longer have the karma to make top-level posts.
Please stop posting and read the sequences.
The point of the conjunction fallacy is that, under specific circumstances, people’s ability to estimate probability will reliably misfire, i.e. people are biased.
This does not require that people’s estimates always misfire. You seem to have some odd personal stake in this; the conjunction fallacy is not some great criticism of people’s moral worth. It is merely an observation that the human brain does not function optimally, much like the existence of blind spots.
I can think of two ways this article represents a coherent point: first, it is pointing out that we do not always miscalculate. This is an obvious fact and does not conflict with the bias in question.
Second, you might be complaining that the circumstances under which people exhibit this bias are so unlike reality as to be irrelevant. You provide no evidence and virtually no argument to this effect. Indeed, people make off-the-cuff estimates without consulting the laws of probability all the time. If your point is that it is of no practical consequence, you did not get anywhere near demonstrating that.
Also, you said that a study is unbiased only if it confirms your view, then asked for unbiased studies contradicting your view. I hope the problem there is now apparent to you.
Where do people claim that? I’ve seen the opposite claimed quite often. You seem to be strawmanning.
The first one is flawed, IMO, but not for the reason you gave (and I wouldn’t call it a ‘trick’). The study design is flawed. They should not ask everyone “which is more probable?” People might just assume that the first choice, “Linda is a bank teller” really means “Linda is a bank teller and not active in the feminist movement” (otherwise the second answer would be a subset of the first, which would be highly unusual for a multiple choice survey).
The Soviet Union study has a better design, where people are randomized and only see one option and are asked how probable it is.
Yes. The bank teller example is probably flawed for that reason. When real people talk to each other, they obey the cooperative principle of Grice or flout it in obvious ways. A cooperative speaker would ask whether Linda was a bank teller who was active in the feminist movement, or a bank teller regardless of whether she was active in the feminist movement (showing that one answer included the other).
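To make the two readings concrete, here is a minimal Python sketch; the numbers are made up purely for illustration and come from no study:

```python
# Made-up numbers, purely illustrative; nothing here is from any study.
p_teller = 0.05                 # hypothetical P(Linda is a bank teller)
p_feminist_given_teller = 0.8   # hypothetical P(feminist | teller)

p_teller_and_feminist = p_teller * p_feminist_given_teller            # 0.04
p_teller_and_not_feminist = p_teller * (1 - p_feminist_given_teller)  # 0.01

# Literal reading: "bank teller" covers both sub-cases, so
# P(teller) = P(teller & feminist) + P(teller & ~feminist),
# and the conjunction can never beat the plain statement.
assert p_teller_and_feminist <= p_teller

# Gricean reading: if "bank teller" is heard as "teller & NOT feminist",
# then preferring the conjunction is no fallacy at all.
assert p_teller_and_feminist > p_teller_and_not_feminist
```

On the literal reading the conjunction loses by necessity; on the exclusive reading the popular answer is perfectly sensible.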
Eliezer addresses this possibility in Conjunction Controversy:
Yet Eliezer is strangely tossing out a lot of information. We don’t just know that she is a bank teller and anti-nuclear, we also know: “She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice.”
At least where I went to college, the conditional probability that someone was a feminist given that they were against nuclear power, majored in philosophy, and were concerned with discrimination and social justice was probably pretty damn high.
Lost me at:
Why are you on this website? Not joking.
“What is your goal here?”, or something like it might have been less confrontational phrasing.
How will I benefit from telling you?
You might, for instance, use this moment as an opportunity to signal that you aren’t a troll or indicate why you shouldn’t be banned, as I fear this fate awaits you if you continue on your present course.
By explaining your reasons for posting to this site you may get feedback suggesting how to better use this site to achieve your goals.
That, of course, depends on your reason—but it could be that you will receive information on how better to achieve your goals, indication that they are impossible or prohibitively costly to accomplish (thus saving you time and effort), the good will of those interacting with you, confirmation that you are acting effectively....
Your goals may conflict with this, however. For instance, if they are to reap discord and vitriol, or to test the community, or end civilization, or to hide your goals, or bring low the proud, or find a worthy sparring partner, you may not benefit from such transparency.
Have a care. Remember absence of evidence is only weak evidence of absence, and that, as you have mentioned, there are many possible explanations.
Probability measure is spread pretty thin in these parts.
Not always. If someone has no symptoms, the probability that they have, say, pneumonia drops considerably.
Awesome. This needs to be one of the first lines a protagonist hears after entering a cantina in a science fiction space opera... maybe in rationalist fanfiction like HPMoR.
Nature will sometimes ask bad and pointless questions and reward anything less than a literally correct answer with grievous injury or death.
Thus it is said that taking humans as your opponents only leads to overconfidence. Nature will not be fair in the challenges she puts before you.
We don’t need everyone to have that skill.
Actually, we really do, given things like democracy and politics.
Post removed from main and discussion on grounds that I’ve never seen anything voted down that far before. Page will still be accessible to those who know the address.
It seems vote scores on EY’s posts about site-related actions and discussion turn into agree/disagree votes, so based on that, the consensus is this was a good action.
But I would like to have a comment here at least voicing the opinion (my opinion on this coincidentally) that just displaying a low vote score is enough to warn people that don’t want to read things that get low vote scores.
Outliers are interesting. If it was me, I’d be checking it out.
I agree. “...on grounds that I’ve never seen anything voted down that far before...” isn’t literally true in the strictest possible sense. Implicitly, it means the editor also did not think of any sufficiently compelling counter-arguments upon reading the piece.
So unless removal for being downvoted by orders of magnitude more than anything else (including those previously removed, to avoid evaporative cooling) was the rule before the editor removed the post, the content of the post had a good deal to do with its removal.
I am not saying it is the wrong decision because of this, but I would prefer the formulation: “Post removed from main and discussion on grounds that I’ve never seen anything voted down that far before, and it seems to me the downvotes are probably justified.”
It’s important not to imply the post’s removal wasn’t a matter of editorial judgement, since it was, not that there’s anything wrong with that.
Arguably no one will be misled, and the editorial explanation by EY is perfect and Gricean.
There’s “still something sort of awesome about a guy who could take on the entire western world from a cave somewhere.” I began writing but did not post three posts of my own for this site because I think them not good enough. One is about Newcomb’s problem, one is about tattoos, and one is about cooking.
I have no issue with people making editorial judgements provided they are upfront about it. On the face of it, EY is saying he removed the post purely on the basis of popularity, as though he is deferring judgement to others. You assert he wasn’t and that no-one would be misled but if you just leave things implied there is a lot of scope for misunderstanding and it is a way of not taking responsibility.
So I’m curious. Pretending for a moment that you have actually approached this situation rationally, you must have had some significant amount of evidence that humans do not commit any kind of conjunction fallacy (except when “tricked”).
Given that you are so confident in this contention that you feel you can safely dismiss every one of the significant number of corroborating scientific studies a priori as “wrong and biased”, I’m wondering where you got this evidence that humans never implement faulty probability estimation techniques except when being tricked, and why nobody else seems to have seen it.
Nope. I don’t have any evidence you don’t have. I don’t think the disagreement is primarily about evidence but philosophical understanding and explanation.
I’ve looked at some. Found they weren’t even close to meeting the proper standards of science. I’ve run into the same problem with other kinds of studies in various fields before. I have general purpose explanations about bad studies. They apply. And no one, on being challenged, has been able to point me to any study that is valid.
I have explanations that probability estimating isn’t how people think in the first place, in general. See: Popperian epistemology.
You mean, nobody else here seems to have seen it, on a site devoted to the idea of applying probability math to human thought.
I wonder what the downvote pattern here would be if people wouldn’t see the existing downvotes in the −20 range along with the post, but would actually need to read the text and individually decide whether to downvote, upvote or abstain with no knowledge of the other votes.
What if the displayed karma, at least to anyone else other than the author, would be shown as just 0 or −1 if it was any amount into the negative? Then there wouldn’t be an invitation to join the happy fun massive downvote pile-on party without even reading the post.
My model predicts the opposite of what you seem to be suggesting, actually. Downvotes are a limited resource, even if the limit isn’t a restrictive one, so I expect that at least some users will decline to downvote in order to conserve that resource if they see that the point has been made. Even without that particular pressure, it seems likely to me that some people just won’t bother to downvote if it’s already clear that something isn’t approved of. The latter is a bit like the bystander effect—if one knows that someone else has already taken care of the problem, one won’t feel much pressure to do so as well.
We’re probably both right, in a sense—when this kind of question has come up before, there have tended to be a mix of responses, so both things are probably happening. I do think that on net we’d see more downvotes if the existing balance wasn’t public until people voted, though, even if ‘abstain’ counted as a vote so that people who were just curious weren’t forced to weigh in.
Yeah, I don’t actually have a very solid anticipation on what would happen, though I would be more surprised if the result was even more downvotes. Still, you do make a case why it might well be.
I’m not sure I’d mind a lot of people reading something and individually deciding to downvote it, that would just do what the score is originally supposed to and sum a lot of individual assessments. What I don’t like is getting the pile-on effect from both showing the current score and the downvote options. It makes downvoting stuff already in the negative further feel icky. It also makes it feel a lot more symbolic to upvote something you see far in the negative or downvote something far in the positive.
And I didn’t even know downvotes cost something. I see a mention of a cap on an old thread, but I don’t see my downvote mana score anywhere in the UI.
I’m not sure I’m parsing this properly—it sounds to me like you’re saying that seeing a very low score along with the option to lower it further makes you less likely to actually downvote, because it feels like ganging up on someone, and more likely to upvote it, compared to not seeing the existing score?
Does it change anything if you specifically focus on the fact that upvotes and downvotes are supposed to mean “I want to see more/less like this” rather than “I think this is good/bad” or “I think this is right/wrong” or “I like/dislike the point being made”?
The downvote limit isn’t displayed anywhere, until you hit it. (And I’m not sure you get a proper notification then, either.) But the algorithm is that you can make four times as many downvotes as your karma score, and then you can’t make any more downvotes until your karma goes up. It’s not something that most people will ever run into, I suspect, but knowing that it’s possible seems likely to have an effect—and possibly a stronger one given the uncertainty of the size of one’s buffer, in fact.
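If that description is accurate, the rule itself is simple; a sketch follows, with an illustrative function name and signature, not the site's actual code:

```python
# Sketch of the downvote cap as described above; the function name and
# signature are illustrative, not the site's actual implementation.
def can_downvote(karma: int, downvotes_cast: int) -> bool:
    return downvotes_cast < 4 * karma
```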
Pretty much. And yeah, it looks like I’m saying that I behave the opposite way from how I expect most of the group to behave. But then again, I’m imagining both parties reacting more to a big, shiny sign saying “Look! This is what Us People are expected to think about this thing!” than to the actual thing itself. I’m just generally reacting to anything with that sort of mindless, mob-like tone with “screw You People.”
It’s easy enough to imagine that when I’m the one doing or receiving the voting. When I’m seeing this stuff unfold elsewhere, I need to imagine that the downvoters and the downvotees are also seeing things in that way. Given that this requires extra thought, and the simple, instant interpretation is upvote good, downvote bad, I’m not having a very easy time imagining that’s the case.
Also given how human group psychology works, I’m not sure there’s that much of a difference there. Saying that you’d like to see less of the sort of thing the author implicitly thought people are interested in seeing by posting it doesn’t sound like that much less of a rebuke than just saying the thing was bad or wrong. You could probably make a case for it being a bigger rebuke, since it implies the author is too incompetent to figure out what’s good content, and the social inferior whom the downvoters can pass judgment on, rather than someone with equal social standing having a dispute with the others on whether something is good and right or bad and wrong.
This is at most only true in one respect: many implementors of the system primarily had such intent.
“It doesn’t even seem to have occurred to them that a voter might attach their own meaning to a vote for Stephen Colbert—that a voter might have their own thoughts about whether a vote for Stephen Colbert was “wasted” or not.”
Count me among those who saw downvoting this as pointless. It was already downvoted enough to be removed from the main page, and downvotes are limited. Also, all else equal, I feel bad when engaging in piling on. This post is what I think of as ideal: it responds to the OP by finding the limited case for which an argument resembling that of the OP would be right, so the OP can see the contrast without feeling attacked.
I read the post in the RSS feed, clicked through to downvote, and then didn’t because I thought “s/he’s suffered enough”. Arguably, with a monumentally wrong post like the OP, we might see more people downvoting if they didn’t know others had.
Before reading the post, I skimmed the comments. I see that the author’s replies to comments are clear evidence of failure. I don’t know, or care, if it’s trolling or merely defensiveness (I guess the latter is by far more likely).
As for the claim that the particular questions are designed to elicit wrong answers (and yes, the wrong answers are objectively wrong), so as to sex up the results, that’s obviously true. That obviousness is probably the reason for the initial downvotes.
I think the author has completely misunderstood the intent of “conjunction fallacy”. It’s not that people are more prone to favor a conjunction as more plausible in general, just because it’s a conjunction; it’s that it’s obviously wrong to consider a conjunction more likely than any one of its components.
I don’t know about front page posts, but a lot of comments stop getting downvotes at −2 or −3. I think they turn invisible for some people by default, so that’s why, rather than people thinking the point is already made.
On Hacker News, early voting trends usually continued to inspire more votes of the same kind. Except sometimes someone would comment “Why is this getting downvoted?”, which could reverse it. They cap things to a −4 point minimum now. Upvote orgies are still common. Some are due to quality, sure—some things deserve a lot of upvotes—but not all of them.
Hacker News seems to have a definite idea on curbing downvote sports by capping the point display when things go to negative. This might be a good idea. Another intentional thing they have is having no downvoting at all on the toplevel posts, but their posts are more links and short questions than original articles, so this might not fit LW that well.
Spurious upvoting is probably also going on, but I don’t see it as being as problematic as downvoting, since it doesn’t come with the direct effect of anonymously telling people that they don’t belong on the forum.
Similar thing happens on reddit. I think it’s widespread across vote-based sites. Any counterexamples?
Others have addressed most of the issues but I’d like to address another issue.
This isn’t at all relevant. In the Soviet–Poland experiment the two groups were asked to give a probability estimate. Whether the claims are better in some sense is distinct from whether or not the probability of one or the other is higher. Even if one prefers claims with explanatory power to those without it, this doesn’t make those claims more probable. Indeed, the fact that they are easier to falsify in Popper’s sense can be thought of as connected in some way to the fact that in a Bayesian sense they are more improbable.
But you can’t argue that because the class of hypotheses is preferred people should be willing to assign a higher probability to them.
“Probability estimate” is a technical term which most people don’t know. That isn’t a bias or a fault. When asked to give one, especially using a phrase like “which is more probable”, which is ambiguous (it is often used in a non-mathematical sense, whereas the term “probability estimate” isn’t), they guess what you want and try to give that instead. Make sense so far?
But this is nullified in the colour die experiment of Tversky and Kahneman, as detailed in the Conjunction Fallacy article, as linked by HonoreDB in a comment in this very topic, as so far unaddressed by you.
65% still chose sequence 2 despite it being a conjunction of sequence 1 with another event. There was actual money on the line if the students chose a winning sequence. There is no plausible way that the students could have misinterpreted this question because of ambiguous understandings of phrases involving “probability”.
I find your claim that people interpret “which is more probable” as asking something other than which is more probable to be dubious at best, but it completely fails to explain the above study result.
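For anyone who wants to check the logic of the die study directly, here is a small Monte Carlo sketch. It assumes the commonly reported setup (a die with four green and two red faces, rolled 20 times, with a chosen sequence winning if it appears anywhere in the run); since sequence 2 strictly contains sequence 1, sequence 1 must win at least as often, whatever the face probabilities:

```python
import random

# Monte Carlo sketch of the die study, assuming the commonly reported
# setup: a die with four green (G) and two red (R) faces, rolled 20
# times; a chosen sequence "wins" if it appears anywhere in the run.
FACES = "GGGGRR"
SEQ1 = "RGRRR"    # sequence 1
SEQ2 = "GRGRRR"   # sequence 2, which strictly contains sequence 1

def win_rate(seq, trials=100_000):
    hits = 0
    for _ in range(trials):
        rolls = "".join(random.choice(FACES) for _ in range(20))
        if seq in rolls:
            hits += 1
    return hits / trials

# Any run containing SEQ2 also contains SEQ1, so SEQ1's win rate can
# never be lower than SEQ2's, whatever the face probabilities are.
print(win_rate(SEQ1), win_rate(SEQ2))
```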
You’re calling lack of mathematical knowledge a bias? Before it was about how people think, now it’s just ignorance of a concrete skill...?
I’m not calling it anything, and I didn’t use the word “bias” once. Perhaps people who have studied probability theory would have fared better on the task. But this is kind of the point, isn’t it? The issue is that when a person intuitively tries to make a judgement on probabilities, their intuition gives them the wrong result, because it seems to use a heuristic based on representativeness rather than actual probability.
Nobody is saying that this heuristic isn’t most likely more than adequate in the majority of normal day-to-day life. You seem to be conflating the claim that people use some kind of representativeness heuristic, rather than correct probability analysis, in probability estimation problems—which appears to be true, see above—with some kind of strawman claim you have invented that people are stupid idiots whose probability estimation faculties are so bad they reliably get every single probability estimation problem wrong every time, lolmorons. People here are posting in defense of the first one, you keep attacking as if they are defending the second one.
The hypothesis
predicts the results of all the studies strongly. Your contrary hypothesis, which to me appears to be along the lines of
predicts the results of some of the studies, but not the red/green die rolling one.
So now we are waiting for your post hoc justification of how test subjects who knew they were seeking a strictly more probable case (because they were hoping to win money) still got the wrong answer.
Not intuitively. They’re in a situation outside their daily routine that they don’t have good intuition about. They are making a judgment not because they think they know what they are doing but because they were asked to.
Can you see how artificial situations are not representative of people’s ability to cope with life? How putting people in artificial situations designed to fool them is itself a biased technique which lends itself to reaching a particular conclusion?
I understand this criticism when applied to the Linda experiment, but not when it is applied to the color experiment. There was no “trickery” here.
To avoid more post-hockery, please propose a concrete experiment testing the conjunction fallacy, but controlling for “tricks” inherent in the design.
Not all the experiments are the same thing, just because they reached the same conclusion.
The color experiment is different: it’s about asking people to do something they don’t know how to do, and then somehow interpreting that as a bias.
The paper doesn’t describe the test conditions (that makes it unscholarly; scientific papers are supposed to have things like “possible sources of error” sections and describe their experimental procedure carefully. It doesn’t even say if it was double-blind. Presumably not. If not, an explanation of why it doesn’t need to be is required but not provided). There are various issues there, e.g. pressure not to ask questions could easily have been present.
Setting that aside, it’s putting people in an artificial situation that they don’t understand well and getting them to make a guess at the answer. This doesn’t simulate real life well. It has a consistent bias: people are better at real life than artificial situations designed in such a way that people will fail.
EDIT: Look at this sentence in the 1983 paper:
See. Biased. Specifically designed to fool people. They directly admit it. You never read the paper, right? You just heard a summary of the conclusions and trusted that they had used the methods of science, which they had not.
What that says is that the test in question addresses only one of two obvious questions about the conjunction fallacy. It addresses the question of whether that kind of mistake is one that people do sometimes make; it does not address the question of how often people make that kind of mistake in practice. It’s like if they were trying to find out if overdosing on a certain vitamin can be fatal: They’ll give their (animal) subjects some absurd amount of it, and see what happens, and if that turns out to be fatal, then they go trying different amounts to see where the effect starts. (That way they know to look at, say, kidney function, if that’s what killed the first batch of subjects, rather than having to test all the different body systems to find out what’s going on.) All the first test tells you is whether that kind of overdose is possible at all.
Just because it doesn’t answer every question you’d like it to doesn’t mean that it doesn’t answer any questions.
Do you understand that the situation of “someone using his intelligence to try to fool you” and the situation “living life” are different? Studies about the former do not give results about the latter. The only valid conclusion from this study is “people can sometimes be fooled, on purpose”. But that isn’t the conclusion it claims to support. Being tricked by people intentionally is different than being inherently biased.
The point is “people can be fooled into making this specific mistake”, which is an indication that that specific mistake is one that people do make in some circumstances, rather than that specific mistake not being made at all. (As a counterexample, I imagine that it would be rather hard to trick someone into claiming that if you put two paperclips in a bowl, and then added two more paperclips to that bowl, and then counted the paperclips, you’d find three—that’s a mistake that people don’t make, even if someone tries to trick them.)
“Some circumstances” might only be “when someone is trying to trick them”, but even if that’s true (which the experiment with the dice suggests it’s not) that’s not as far removed from “living life” as you’re trying to claim—people do try to trick each other in real life, and it’s not too unusual to encounter other situations that are just as tricky as dealing with a malicious human.
This is exhausting. Whatever this “real life” of yours is, it’s so boring, predictable, and uniform as to be not worth living. Who hasn’t had to adapt to new (yes, even “artificial”) situations before? Who hasn’t played a game? Who hasn’t bet on the outcome of an event?
Propose an experiment already. My patience is waning.
EDIT: I have, in fact, read the 1983 paper.
You think I have to propose a better experiment? We should just believe the best experiment anyone did, even if it’s no good?
Yes. If there’s no way to test your claim that systemic pseudoscientific practice brought about the conjunction fallacy, how can we know?
I have something like a thirty percent chance of correctly determining whether or not you will judge a given experimental design to be biased. Therefore I surely can’t do it.
No. Of course not.
Ceterum censeo propose an experiment already.
I don’t know of any experiment that would be any good. I think the experiments of this kind should stop, pending a new idea about how to do fundamentally better ones.
Then we’re in invisible dragon territory. I can safely ignore it.
This is not exactly an uncommon situation in day-to-day life.
Your objection seems to boil down to, “Experimental settings are not identical to actual settings, therefore, everything derived from them is useless.”
It seems pretty unlikely that people will miscalculate probability in an experiment with money on the line, but then always calculate it with perfect accuracy when they are not specifically in an experimental setting.
In short how is the experimental setting so different that we should completely ignore experimental results? If you have a detailed argument for that, then you’d actually be making a point.
I think, in general, they don’t calculate it at all.
They don’t do it in normal life, and they don’t do it in most of the studies either.
It’s different in that it’s specifically designed to elicit this mistake. It’s designed to trick people. How do I know this? Well apart from the various arguments I’ve given, they said so in their paper.
So, your position is that it is completely unrealistic to try to trick people, because that never happens in real life? Really? Or is it a moral condemnation of the researchers? They’re bad people, so we shouldn’t acknowledge their results?
People try to trick each other all the time. People try to trick themselves all the time. People routinely make informal estimates of what’s likely to happen. These experiments all show that under certain circumstances, people will systematically be wrong. Not one thing you’ve said goes to counter this. Your entire argument appears predicated on the confusion of “all” with “some.”
If you succeed at tricking people, you can get them to make mistakes.
What those mistakes are is an interesting issue. There is no argument the mistakes actually have anything to do with the conjunction fallacy. They were simply designed to look like they do.
This pretty much is the conjunction fallacy. Facts presented in certain ways will cause people to systematically make mistakes. If people did not have a bias in this respect, these tricks would not work. It is going to be hard to get people to think, “Billy won the science fair and is captain of the football team” is more likely than each statement separately, because the representativeness heuristic is not implicated.
I have no idea what this means, unless it’s saying, “I’m right and that’s all there is to say.” This is hardly a useful claim. There appears to be rather overwhelming argument on the part of most of the commenters on this forum and the vast majority of psychologists that there is not only such an argument, but that it is compelling.
It is unclear how this relates to grandparent’s point.
No. Absolutely not. First of all, the Soviet experiment was done (and has been replicated in other versions) with educated experts. Second of all, it isn’t very technical at all to ask for a probability. People learn these in grade school. How do you think you can begin to talk to the Bayesians when you think “probability estimate” is a technical term? They use far more difficult math than that as part of their basic apparatus. Incidentally, this hypothesis is also ruled out by the follow-up studies already linked in this thread, which show that the conjunction fallacy shows up when people are trying to bet.
I have to wonder if your fanatical approach to Popper is causing problems, in which you think, or act as if you think, that any criticism, no matter how weak or ill-informed, allows the rejection of a claim until someone responds to that specific criticism. This is not a healthy attitude if one is interested in an exchange of ideas.
I agree that these studies don’t necessarily show what they intend to. In fact, I could easily imagine giving a higher probability for X&Y than Y myself… if I were asked about Y first, and if X were evidence for Y that I had not considered. (Of course, after the fact I’d adjust my probability for Y.)
Then you’re talking about conditional probability (Y | X), rather than joint probability.
It’s quite possible that if you thought the questions were asking about conditional probability, participants in these studies might have, too. Let’s take the Russia question:
Taken more literally, this question is asking about the joint probability: P( invasion & suspension of diplomatic relations)
But in English, the question could be read as: “A Russian invasion of Poland and, given this invasion, a complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983.”
in which case participants making that interpretation might give P( suspension of diplomatic relations | invasion )
It’s not at all clear that the participants interpreted these questions the way the experimenters thought.
What I mean is, I might not have any particular notion of what could cause a complete suspension of diplomatic relations, and give it, say, .01 probability. Then, when asked the second question, I might think “Oh! I hadn’t thought of that—it’s actually quite likely (.5) that there’ll be an invasion, and that would be likely (.5) to cause a suspension of diplomatic relations, so A^B has a probability of .25 (actually, slightly more). Of course, this means that B has a probability higher than .25, so if I can go back and change my answer, I will.”
(These numbers are not representative of anything in particular!)
I do agree, however, that the literature overall strongly suggests the veracity of this bias.
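Spelling out the arithmetic from the comment above, with the same arbitrary placeholder numbers:

```python
# The same arbitrary numbers as in the comment above, spelled out.
p_invasion = 0.5                    # P(A): the invasion
p_suspension_given_invasion = 0.5   # P(B | A): suspension, given invasion
p_joint = p_invasion * p_suspension_given_invasion  # P(A & B) = 0.25

# Every world where A and B both happen is a world where B happens, so
# P(B) >= P(A & B) = 0.25, which forces the earlier 0.01 estimate of
# P(B) upward -- exactly the revision described above.
assert p_joint == 0.25
```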
In the case of Russia/Poland question the subjects were professional political analysts. For this case in particular we can assume that for these subjects the amount of information about political relations included in the question itself is insignificant.
That’s… somewhat discouraging.
Enough so that I had to triple check the source to be sure I hadn’t got the details wrong.
It certainly contradicts the claim that these studies test artificial judgments that the subjects would never face in day-to-day life.
Eliezer’s post about defying the data (sometimes called ‘defying the evidence’) seems relevant here.