Do people think in a Bayesian or Popperian way?
People think A&B is more likely than A alone, if you ask the right question. That’s not very Bayesian; as far as you Bayesians can tell it’s really quite stupid.
Is that maybe evidence that Bayesianism is failing to model how people actually think?
Popperian philosophy can make sense of this (without hating on everyone! it's not good to hate on people when there are better options available). It explains it like this: people like explanations. When you say "A happened because B happened" it sounds to them like a pretty good explanatory theory which makes sense. When you say "A alone" they don't see any explanation, and they read it as "A happened for no apparent reason", which is a bad explanation, so they score it worse.
To concretize this, you could use A = economic collapse and B = nuclear war.
People are looking for good explanations. They are thinking in a Popperian fashion.
Isn't it weird how you guys talk about all these biases, which basically consist of people not thinking in the way you think they should, but when someone says "hey, actually they think in this way Popper worked out" you think that's crazy because the Bayesian model must be correct? Why did you find all these counterexamples to your own theory and then never notice they mean your theory is wrong? In the cases where people don't think in a Popperian way, Popper explains why (mostly b/c of the justificationist tradition informing many mistakes since Aristotle).
More examples, from http://wiki.lesswrong.com/wiki/Bias
Scope Insensitivity—The human brain can’t represent large quantities: an environmental measure that will save 200,000 birds doesn’t conjure anywhere near a hundred times the emotional impact and willingness-to-pay of a measure that would save 2,000 birds.
Changing the number does not change most of the explanations involved, such as why helping birds is good, what the person can afford to spare, how much charity it takes for the person to feel altruistic enough (or moral enough, involved enough, helpful enough, whatever), etc. Since the major explanatory factors they were considering don't change in proportion to the number of birds, their answer doesn't change proportionally either.
Correspondence Bias, also known as the fundamental attribution error, refers to the tendency to attribute the behavior of others to intrinsic dispositions, while excusing one’s own behavior as the result of circumstance.
This happens because people usually know the explanations/excuses for why they did stuff, but they don’t know them for others. And they have more reason to think of them for themselves.
Confirmation bias, or Positive Bias is the tendency to look for evidence that confirms a hypothesis, rather than disconfirming evidence.
People do this because of the justificationist tradition, dating back to Aristotle, which Bayesian epistemology is part of, and which Popper rejected. This is a way people really don’t think in the Popperian way—but they could and should.
Planning Fallacy—We tend to plan envisioning that everything will go as expected. Even assuming that such an estimate is accurate conditional on everything going as expected, things will not go as expected. As a result, we routinely see outcomes worse than the ex ante worst case scenario.
This is also caused by the justificationist tradition, which Bayesian epistemology is part of. It’s not fallibilist enough. This is a way people really don’t think in the Popperian way—but they could and should.
Well, that’s part of the issue. The other part is they come up with a good explanation of what will happen, and they go with that. That part of their thinking fits what Popper said people do. The problem is not enough criticism, which is from the popularity of justificationism.
Do We Believe Everything We’re Told? - Some experiments on priming suggest that mere exposure to a view is enough to get one to passively accept it, at least until it is specifically rejected.
That's very Popperian. The Popperian way is that you can make conjectures however you want, and you only reject them if there's a criticism. No criticism, no rejection. This contrasts with the justificationist approach, in which ideas are required to (impossibly) have positive support, and the focus is on positive support, not criticism (thus causing, e.g., Confirmation Bias).
Illusion of Transparency—Everyone knows what their own words mean, but experiments have confirmed that we systematically overestimate how much sense we are making to others.
This one is off topic but there are several things I wanted to say. First, people don't always know what their own words mean. People talking about tricky concepts like God, qualia, or consciousness often can't explain what they mean by the words if asked. Sometimes people even use words without knowing the definition; they just heard it in a similar circumstance another time or something.
The reason others don't understand us, often, is the nature of communication. For communication to happen, the other person has to create knowledge of what idea(s) you are trying to express to him. That means he has to make guesses about what you are saying and use criticisms to improve those guesses (e.g. by ruling out stuff incompatible with the words he heard you use). In this way Popperian epistemology lets us understand communication, and why it's so hard.
Evaluability—It’s difficult for humans to evaluate an option except in comparison to other options. Poor decisions result when a poor category for comparison is used. Includes an application for cheap gift-shopping.
It’s because they are trying to come up with a good explanation of what to buy. And “this one is better than this other one” is a pretty simple and easily available kind of explanation to create.
The Allais Paradox (and subsequent followups) - Offered choices between gambles, people make decision-theoretically inconsistent decisions.
How do you know that kind of thing and still think people reason in a Bayesian way? They don't. They just guess at what to gamble, and the quality of the guesses is limited by what criticisms they use. If they don't know much math then they don't subject their guesses to much mathematical criticism. Hence this mistake.
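To see why the usual choices are called inconsistent, here is a minimal sketch (Python, my own illustration; the payoffs and probabilities are the standard Allais gambles, and the helper names are made up). It checks that no expected-utility assignment can prefer both 1A over 1B and 2B over 2A, which is the pattern people commonly report:

```python
# Standard Allais gambles (payoffs in millions), as (probability, payoff) pairs.
gamble_1A = [(1.00, 1)]
gamble_1B = [(0.10, 5), (0.89, 1), (0.01, 0)]
gamble_2A = [(0.11, 1), (0.89, 0)]
gamble_2B = [(0.10, 5), (0.90, 0)]

def expected_utility(gamble, utility):
    """Expected utility of a gamble under a given utility function over payoffs."""
    return sum(p * utility(x) for p, x in gamble)

# Preferring 1A over 1B means 0.11*u(1) > 0.10*u(5) + 0.01*u(0);
# preferring 2B over 2A means 0.10*u(5) + 0.01*u(0) > 0.11*u(1).
# These contradict each other, so no utility function u yields both preferences.
# A spot check with a few sample utility functions:
for name, u in [("linear", lambda x: x),
                ("concave", lambda x: x ** 0.5),
                ("very concave", lambda x: x ** 0.01)]:
    prefers_1A = expected_utility(gamble_1A, u) > expected_utility(gamble_1B, u)
    prefers_2B = expected_utility(gamble_2B, u) > expected_utility(gamble_2A, u)
    print(name, prefers_1A, prefers_2B)  # never True, True
```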
Nobody here is claiming that people naturally reason in a Bayesian way.
We are claiming that they should.
This, this, a million times this.
If people don’t reason in a Bayesian way, but they do reason, it implies there is a non-Bayesian way to reason which works (at least a fair amount, e.g. we managed to build computers and space ships). Right?
Claims that people think in an inductive way are common here. Note how my descriptions are different than that and account for the evidence.
Someone told me that humans do and must think in a Bayesian way at some level b/c it's the only way that works.
As Eliezer said in Searching for Bayes-Structure:
Humans think in an approximately Bayesian way. The biases are the places where the approximation breaks down, and human thinking starts to fail.
You have not given one example of non-inductive thinking. I really do not see how you could get through the day without induction.
I am riding my bike to college after it rained during the night, and I notice that the rain has caused a path I use to become a muddy swamp, meaning I have to take a detour and arrive late. Next time it rains, I leave home early because I expect to encounter mud again.
If you wish to claim that most people are non-inductive you must either:
1) Show that I am unusual for thinking in this way
or
2) Show how someone else could come to the same conclusion without induction.
If you choose 1) then you must also show why this freakishness puts me at a disadvantage, or concede that other people should be inductive.
I get hungry. So I guess some things I might like to eat. I criticize my guesses. I eat.
benelliott posts on less wrong. I guess what idea he’s trying to communicate. With criticism and further guessing I figure it out. I reply.
Most of this is done subconsciously.
Now how about an example of induction?
In order to evaluate if it is an example of induction, you'll need to start with a statement of the method of induction. This is not b/c I'm unfamiliar with such a thing but because we will disagree about it, and we had better have one to get us on the same page (inductivists vary a lot; I know many different statements of how it works).
In the example you give, you don’t give any explanation of what you think it has to do with induction. Do you think it’s inductive because you learned a new idea? Do you think it’s inductive because it’s impossible to conjecture that you should do that next time it rains? Do you think it’s inductive because you learned something from a single instance? (Normally people giving examples of induction will have multiple data points they learn from, not one. Your example is not typical at all.)
I’m tempted just to point to my example and say ‘there, that’s what I call induction’, but I doubt that will satisfy you so I will try to give a more rigorous explanation.
I view induction as Bayesian updating/decision theory with an inductive prior. To clarify what I mean, suppose I am faced with an opaque jar containing ten beads, each of which is either red or white. What is my prior for the contents of the jar? It depends on my background knowledge.
1) I may know that someone carefully put 5 red beads and 5 white beads in the jar
2) I may know that each ball was chosen randomly with probability p, where p is a parameter which is (as far as I know) equally likely to be anywhere between 0 and 1
3) I may know that each ball was tossed in by a monkey which was drawing randomly from two barrels, one containing red balls, one containing white balls.
I may also have many other states of knowledge, but I give just three examples for simplicity.
1) is anti-inductive. If I have drawn N balls, R of which have been red, then P(the next ball is red) = (5-R)/(10-N), so every red I draw decreases my anticipation of red, while every white increases it.
2) is inductive. If I have drawn N balls, R of which have been red, then P(the next ball is red) = (R+1)/(N+2) (this is a theorem due to Laplace; the proof is not quite trivial, but see the numerical check at the end of this comment). Every red ball increases my anticipation of red, while every white decreases it. Notice how it takes many reds to provide strong evidence, but even one red is sufficient for a fairly large update, from 0.5 to 0.67.
3) is neither inductive nor anti-inductive. P(the next ball is red) = 0.5 regardless of what I have drawn. Past observations do not influence expectation of future observations.
With the mud, none of the three examples perfectly describes my prior, but 2) comes closest. Most proposals for universal priors are to some extent inductive; for example, Solomonoff induction assigns a much higher probability to '1000 0s' than '999 0s followed by a 1'.
Brief note: human induction and Solomonoff induction are more sophisticated than 2), mainly because they have better pattern-spotting abilities, and so the process is not quite analogous.
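As promised, here is a quick numerical check of the formula in 2) (a minimal Python sketch; the function names are my own). With a uniform prior over the red-fraction p, the posterior predictive probability of red after drawing R reds in N balls comes out to (R+1)/(N+2):

```python
import math

def beta(a, b):
    """Beta function: B(a, b) = Gamma(a) * Gamma(b) / Gamma(a + b)."""
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def p_next_red(N, R):
    """P(next ball is red) after R reds in N draws, under a uniform prior on p.
    The posterior over p is Beta(R + 1, N - R + 1); this is its mean."""
    return beta(R + 2, N - R + 1) / beta(R + 1, N - R + 1)

# Matches Laplace's rule of succession, (R + 1) / (N + 2), in every case.
for N, R in [(0, 0), (1, 1), (4, 3), (9, 9)]:
    print(N, R, round(p_next_red(N, R), 4), round((R + 1) / (N + 2), 4))
```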
There is. That does not mean that it is without error, or that errors are not errors. A&B is, everywhere and always, no more likely than A. Any method of concluding otherwise is wrong. If the form of reasoning that Popper advocates endorses this error, it is wrong.
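For completeness, the one-line argument (standard probability, not specific to either camp): P(A & B) = P(A) * P(B given A), and since P(B given A) can be at most 1, P(A & B) <= P(A) no matter what A and B are.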
Whoever that was is wrong.
Eliezer?
Eliezer can say whether curi's view is a correct reading of that article, but it seems to me that if Bayesian reasoning is the core that works, but humans do a lot of other stuff as well that is all either useless or harmful, and they can't even tell the gold from the dross, then this is not in contradiction with demonstrating that the other stuff is due to Popperian reasoning. It rather counts against Popper, though. Or at least, Popperianism.
Agreed.
Here’s someone saying it again by quoting Yudkowsky saying it:
http://lesswrong.com/lw/56e/do_people_think_in_a_bayesian_or_popperian_way/3w7o
No doubt Yudkowsky is wrong, as you say.
See my other response to Oscar_Cunningham, who cited the same article.
The core of the problem:
No link to that someone? If you can remember who it was, you should go and argue with them. To everyone else, this is a straw man.
(Certainly there are researchers looking for Bayes structure in low-level neural processing, but those investigations focus on tasks far below human cognition.)
Here’s someone saying it again by quoting Yudkowsky saying it:
http://lesswrong.com/lw/56e/do_people_think_in_a_bayesian_or_popperian_way/3w7o
Some straw man… I thought people would be familiar with this kind of thing without me having to quote it.
Please, stop. This has gone on long enough. You don’t have to respond to everything, and you shouldn’t respond to everything. By trying to do so, you have generated far more text than any reasonable person would be willing to read, and it’s basically just repeating the same incorrect position over and over again. It is quite clear that we are not having a rational discussion, so there is nothing further to say.
Indeed. This Popperclipping of the discussion section should cease.
This situation seems an ideal test of the karma system.
And it works.
What beneficial effect have you observed? I ask because people were complaining about the forum being popperclipped. Do you disagree with these complaints? Or do you think that the karma system has trained the low-karma popperclipping participants to improve the quality of their comments? One of them recently wrote a post admitting and defending the tactic of being obnoxious—he said that his obnoxiousness was to filter out time-wasters.
I mean curi now has insufficient karma to post on the main page and his comments are generally heavily downvoted. People can disable viewing low-karma comments, so popperclipping (whatever it means—did the old term "troll" fall out of fashion?) may not be a problem. Therefore I think that karma works.
Curi’s karma periodically spikes despite posting no significantly upvoted comments or any improvement in his reception. I suspect he or someone else who frequents his site may be generating puppet accounts to feed his comments karma (his older comments appear to have gone through periodic blanket spikes.) He’s posted main page and discussion articles multiple times after his karma has dropped to zero without first producing more comments that are upvoted, due to these spikes.
If this is true, it would be natural for the moderators to step in and ban him.
I asked matt if this could be confirmed, but apparently there’s only a very time-consuming method to gather anything other than circumstantial evidence for the accusation.
Jimrandomh had an idea for setting up a script that might help, maybe talk to him? In any event, it might be useful to have the capability to do this in general. That said, since this is only the first time we’ve had such a problem, it doesn’t seem as of right now that this is a common enough issue to really justify investing in additional capabilities for the software.
I believe that “popperclipping” is a play on words, a joke, alluding to a popular LW topic. Explaining it more might kill the joke.
Currently, on the main page, the most recent post under “Recent Posts” is curi’s The Conjunction Fallacy Does Not Exist. The comments under this are showing up in the Recent Comments column. Of the five comments I see in the recent comments column, three are comments under curi’s posts. That is a majority. As of now, then, it appears that curi continues to dominate discussion, either directly or by triggering responses.
Damn, I thought it was in the discussion section. Then I retract my statement that karma works. Still, what's the explanation? Where did curi get enough karma to balance the blow from his heavily downvoted comments and posts? I have looked at two pages of his recent activity, where his score was −112 (−70 for the main page post, −42 for the rest). And I know he was near zero after his last-but-one main page post was published.
Maybe mass upvoting by sockpuppets?
Certainly. I only missed the standard name for that behaviour spelled out loud.
Seconded. When I discovered this ongoing conversation on Popperian epistemology, there were already three threads, some of them with hundreds of comments, and no signs of progress and mutual agreement, only argument. There may be some comments worth reading in the stack, but they’re not worth the effort of digging.
While agreeing with you completely, I’ll also point out that quite a few people have been feeding this particular set of threads… that is, continuing to have, at enormous length, a discussion in which no progress is being made.
Others have already answered this, but there's another problem: you clearly haven't read the actual literature on the conjunction fallacy. It doesn't just occur in the form "A because of B." It connects with the representativeness heuristic. Thus, for suitably chosen A and B, people act like "A and B" is more likely than "A". See Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Tversky, Amos; Kahneman, Daniel. Psychological Review, Vol 90(4), Oct 1983, 293–315. doi: 10.1037/0033-295X.90.4.293
Please stop posting and read the literature on these issues.
With the Allais Paradox, would you say that the decisions people make are consistent with Popperian philosophy? Or at any rate would you say that, as a Popperian, you would make similar decisions?
Are you implying human thinking should be used as some sort of benchmark? Why in the space of all possible thought processes would the human family of thought processes, hacked together by evolution to work just barely well enough, represent the ideal? Also, are you applying the ‘popperian’ label to human thinking? If I prove human thinking to be wrong by its own standards, have I falsified the popperian process of approaching truth?
I am not well versed (or much invested) in bayes but this is not making much sense.
To clarify/rephrase/expand on this, I think Alexandros is suggesting that the questions "how do humans think?" and "what is a rational way to think?" are separate questions, and if we are discussing the first of these two questions then perhaps we have been sidetracked.
In fact, this is nicely highlighted by your very first sentence:
That is a quite stupid way to think, and if we want to think rationally we should desire to not think that way, regardless of whether it is in fact a common way of thinking.
No. What?
I think you should read up on the conjunction fallacy. Your example does not address the observations made in research by Kahneman and Tversky. The questions posed in the research do not assume causal relationships; they are just two probabilities. I won't rewrite the whole wiki article, but the upshot of the conjunction fallacy is that people use the representativeness heuristic to assess odds, instead of using the correct procedures they would have used if that heuristic hadn't been cued. People who would never say "Joe rolled a six and a two" is more likely than "Joe rolled a two" do say "Joe is a New Yorker who rides the subway" is more likely than "Joe is a New Yorker", when presented with information about Joe.
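To make the dice comparison concrete, here is a minimal Python sketch (my own illustration, not from the paper) that enumerates all 36 equally likely outcomes of two dice and counts the two events:

```python
from itertools import product

# All 36 equally likely outcomes of rolling two dice.
outcomes = list(product(range(1, 7), repeat=2))

rolled_a_two = [o for o in outcomes if 2 in o]
rolled_a_six_and_a_two = [o for o in outcomes if 2 in o and 6 in o]

# The conjunction can never cover more outcomes than either conjunct alone.
print(len(rolled_a_two), "out of 36")            # 11 out of 36
print(len(rolled_a_six_and_a_two), "out of 36")  # 2 out of 36
```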