If people don’t reason in a Bayesian way, but they do reason, it implies there is a non-Bayesian way to reason which works (at least a fair amount, e.g. we managed to build computers and space ships). Right?
Claims that people think in an inductive way are common here. Note how my descriptions differ from that and account for the evidence.
Someone told me that humans do and must think in a bayesian way at some level b/c it’s the only way that works.
As Eliezer said in Searching for Bayes-Structure:
The way you begin to grasp the Quest for the Holy Bayes is that you learn about cognitive phenomenon XYZ, which seems really useful—and there’s this bunch of philosophers who’ve been arguing about its true nature for centuries, and they are still arguing—and there’s a bunch of AI scientists trying to make a computer do it, but they can’t agree on the philosophy either -
And—Huh, that’s odd! - this cognitive phenomenon didn’t look anything like Bayesian on the surface, but there’s this non-obvious underlying structure that has a Bayesian interpretation—but wait, there’s still some useful work getting done that can’t be explained in Bayesian terms—no wait, that’s Bayesian too—OH MY GOD this completely different cognitive process, that also didn’t look Bayesian on the surface, ALSO HAS BAYESIAN STRUCTURE—hold on, are these non-Bayesian parts even doing anything?
Yes: Wow, those are Bayesian too!
No: Dear heavens, what a stupid design. I could eat a bucket of amino acids and puke a better brain architecture than that.
Someone told me that humans do and must think in a bayesian way at some level b/c it’s the only way that works.
Humans think in an approximately Bayesian way. The biases are the places where the approximation breaks down, and human thinking starts to fail.
Claims that people think in an inductive way are common here. Note how my descriptions differ from that and account for the evidence.
You have not given one example of non-inductive thinking. I really do not see how you could get through the day without induction.
I am riding my bike to college after it rained during the night, and I notice that the rain has turned a path I use into a muddy swamp, so I have to take a detour and arrive late. Next time it rains, I leave home early because I expect to encounter mud again.
If you wish to claim that most people are non-inductive you must either:
1) Show that I am unusual for thinking in this way
or
2) Show how someone else could come to the same conclusion without induction.
If you choose 1) then you must also show why this freakishness puts me at a disadvantage, or concede that other people should be inductive.
You have not given one example of non-inductive thinking. I really do not see how you could get through the day without induction.
I get hungry. So I guess some things I might like to eat. I criticize my guesses. I eat.
benelliott posts on less wrong. I guess what idea he’s trying to communicate. With criticism and further guessing I figure it out. I reply.
Most of this is done subconsciously.
Now how about an example of induction?
In order to evaluate whether it is an example of induction, you’ll need to start with a statement of the method of induction. This is not b/c I’m unfamiliar with such a thing, but because we will disagree about it and we had better have a shared statement to get us on the same page (inductivists vary a lot; I know many different statements of how it works).
In the example you give, you don’t explain what you think it has to do with induction. Do you think it’s inductive because you learned a new idea? Because it would be impossible to simply conjecture that you should do that next time it rains? Because you learned something from a single instance? (Normally people giving examples of induction have multiple data points to learn from, not one. Your example is not typical at all.)
In order to evaluate whether it is an example of induction, you’ll need to start with a statement of the method of induction. This is not b/c I’m unfamiliar with such a thing, but because we will disagree about it and we had better have a shared statement to get us on the same page (inductivists vary a lot; I know many different statements of how it works).
I’m tempted just to point to my example and say ‘there, that’s what I call induction’, but I doubt that will satisfy you, so I will try to give a more rigorous explanation.
I view induction as Bayesian updating/decision theory with an inductive prior. To clarify what I mean, suppose I am faced with an opaque jar containing ten balls, each of which is either red or white. What is my prior for the contents of the jar? It depends on my background knowledge.
1) I may know that someone carefully put 5 red balls and 5 white balls in the jar.
2) I may know that each ball was chosen to be red independently with probability p, where p is a parameter which is (as far as I know) equally likely to be anywhere between 0 and 1.
3) I may know that each ball was tossed in by a monkey which was drawing at random, with equal probability, from two barrels, one containing red balls and one containing white balls.
I may also have many other states of knowledge, but I give just three examples for simplicity.
1) is anti-inductive. If I have drawn N balls, R of which have been red, then P(the next ball is red) = (5-R)/(10-N), so every red I draw decreases my anticipation of red, while every white increases it.
2) is inductive. If I have drawn N balls, R of which have been red, then P(the next ball is red) = (R+1)/(N+2) (this is Laplace’s rule of succession; the proof is not quite trivial, but a short derivation follows below). Every red ball increases my anticipation of red, while every white decreases it. Notice how it takes many reds to provide strong evidence, but even one red is sufficient for a fairly large update, from 0.5 to 0.67.
3) is neither inductive nor anti-inductive. P(the next ball is red) = 0.5 regardless of what I have drawn. Past observations do not influence expectation of future observations.
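For reference, here is the derivation behind (R+1)/(N+2): with a uniform prior over p, the posterior predictive probability is a ratio of two Beta integrals,
$$P(\text{next red} \mid R \text{ reds in } N \text{ draws}) = \frac{\int_0^1 p \cdot p^R (1-p)^{N-R}\,dp}{\int_0^1 p^R (1-p)^{N-R}\,dp} = \frac{(R+1)!\,(N-R)!\,/\,(N+2)!}{R!\,(N-R)!\,/\,(N+1)!} = \frac{R+1}{N+2}.$$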
With the mud, none of the three examples perfectly describes my prior, but 2) comes closest. Most proposals for universal priors are to some extent inductive; for example, Solomonoff induction assigns a much higher probability to ‘1000 0s’ than to ‘999 0s followed by a 1’.
Brief note: human induction and Solomonoff induction are more sophisticated than 2), mainly because they have better pattern-spotting abilities, so the process is not quite analogous.
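To make the three knowledge states concrete, here is a minimal Python sketch (the function names are mine, for illustration only) that computes P(the next ball is red) under each prior and cross-checks the rule of succession by brute-force updating over a grid of values of p:

```python
from fractions import Fraction

def p_next_red_known_five(n_drawn, r_seen):
    # Scenario 1: exactly 5 red and 5 white balls in the jar;
    # drawing without replacement, P(next red) = (5 - R) / (10 - N).
    return Fraction(5 - r_seen, 10 - n_drawn)

def p_next_red_uniform_p(n_drawn, r_seen):
    # Scenario 2: each ball independently red with unknown probability p,
    # p uniform on [0, 1]; Laplace's rule of succession gives (R + 1) / (N + 2).
    return Fraction(r_seen + 1, n_drawn + 2)

def p_next_red_fair_coin(n_drawn, r_seen):
    # Scenario 3: each ball independently red with probability 1/2;
    # past draws carry no information about the next one.
    return Fraction(1, 2)

# After drawing 3 balls, all of them red:
print(p_next_red_known_five(3, 3))   # 2/7 -- anti-inductive: red became less likely
print(p_next_red_uniform_p(3, 3))    # 4/5 -- inductive: red became more likely
print(p_next_red_fair_coin(3, 3))    # 1/2 -- neither

# Cross-check the rule of succession by direct Bayesian updating
# over a fine grid of candidate values of p:
def rule_of_succession_check(n_drawn, r_seen, grid=100_000):
    num = den = 0.0
    for i in range(grid):
        p = (i + 0.5) / grid                            # uniform prior over p
        like = p**r_seen * (1 - p)**(n_drawn - r_seen)  # likelihood of the draws
        num += like * p                                 # weighted by P(next red | p)
        den += like
    return num / den

print(rule_of_succession_check(3, 3))  # ~0.8, matching (3 + 1) / (3 + 2)
```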
If people don’t reason in a Bayesian way, but they do reason, it implies there is a non-Bayesian way to reason which works (at least a fair amount, e.g. we managed to build computers and space ships).
There is. That does not mean that it is without error, or that errors are not errors. A&B is, everywhere and always, no more likely than A. Any method of concluding otherwise is wrong. If the form of reasoning that Popper advocates endorses this error, it is wrong.
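(To spell out the standard argument: by the product rule,
$$P(A \wedge B) = P(A)\,P(B \mid A) \le P(A),$$
since a conditional probability can never exceed 1.)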
Nobody here is claiming that people naturally reason in a Bayesian way.
We are claiming that they should.
This, this, a million times this.
Someone told me that humans do and must think in a bayesian way at some level b/c it’s the only way that works.
Whoever that was is wrong.
Eliezer?
Eliezer can say whether curi’s view is a correct reading of that article, but it seems to me that if Bayesian reasoning is the core that works, while humans also do a lot of other stuff that is either useless or harmful, without even knowing the gold from the dross, then that is not contradicted by a demonstration that the other stuff is due to Popperian reasoning. It rather counts against Popper, though. Or at least, Popperianism.
(Certainly there are researchers looking for Bayes structure in low-level neural processing, but those investigations focus on tasks far below human cognition.)
Agreed.
Here’s someone saying it again by quoting Yudkowsky saying it:
http://lesswrong.com/lw/56e/do_people_think_in_a_bayesian_or_popperian_way/3w7o
No doubt Yudkowsky is wrong, as you say.
See my other response to Oscar_Cunningham, who cited the same article.
The core of the problem:
Someone told me that humans do and must think in a bayesian way at some level b/c it’s the only way that works.
No link to that someone? If you can remember who it was, you should go and argue with them. To everyone else, this is a straw man.
Some straw man… I thought people would be familiar with this kind of thing without me having to quote it.