Someone told me that humans do and must think in a Bayesian way at some level because it’s the only way that works.
Humans think in an approximately Bayesian way. The biases are the places where the approximation breaks down, and human thinking starts to fail.
Claims that people think in an inductive way are common here. Note how my descriptions are different from that and account for the evidence.
You have not given one example of non-inductive thinking. I really do not see how you could get through the day without induction.
I am riding my bike to college after it rained during the night, and I notice that the rain has caused a path I use to become a muddy swamp, meaning I have to take a detour and arrive late. Next time it rains, I leave home early because I expect to encounter mud again.
If you wish to claim that most people are non-inductive you must either:
1) Show that I am unusual for thinking in this way
or
2) Show how someone else could come to the same conclusion without induction.
If you choose 1) then you must also show why this freakishness puts me at a disadvantage, or concede that other people should be inductive.
You have not given one example of non-inductive thinking. I really do not see how you could get through the day without induction.
I get hungry. So I guess some things I might like to eat. I criticize my guesses. I eat.
benelliott posts on less wrong. I guess what idea he’s trying to communicate. With criticism and further guessing I figure it out. I reply.
Most of this is done subconsciously.
Now how about an example of induction?
In order to evaluate whether it is an example of induction, you’ll need to start with a statement of the method of induction. This is not because I’m unfamiliar with such a thing, but because we will disagree about it, and we’d better have one statement in front of us to get on the same page (inductivists vary a lot; I know many different statements of how it works).
In the example you give, you don’t give any explanation of what you think it has to do with induction. Do you think it’s inductive because you learned a new idea? Do you think it’s inductive because it’s impossible to conjecture that you should do that next time it rains? Do you think it’s inductive because you learned something from a single instance? (Normally people giving examples of induction will have multiple data points they learn from, not one. Your example is not typical at all.)
In order to evaluate whether it is an example of induction, you’ll need to start with a statement of the method of induction. This is not because I’m unfamiliar with such a thing, but because we will disagree about it, and we’d better have one statement in front of us to get on the same page (inductivists vary a lot; I know many different statements of how it works).
I’m tempted just to point to my example and say ‘there, that’s what I call induction’, but I doubt that will satisfy you so I will try to give a more rigorous explanation.
I view induction as Bayesian updating/decision theory with an inductive prior. To clarify what I mean, suppose I am faced with an opaque jar containing ten beads, each of which is either red or white. What is my prior for the contents of the jar? It depends on my background knowledge.
1) I may know that someone carefully put 5 red beads and 5 white beads in the jar
2) I may know that each ball was chosen randomly with probability p, where p is a parameter which is (as far as I know) equally likely to be anywhere between 0 and 1
3) I may know that each ball was tossed in by a monkey which was drawing randomly from two barrels, one containing red balls, one containing white balls.
I may also have many other states of knowledge, but I give just three examples for simplicity.
1) is anti-inductive. If I have drawn N balls, R of which have been red, then P(the next ball is red) = (5-R)/(10-N), so every red I draw decreases my anticipation of red, while every white increases it.
2) is inductive. If I have drawn N balls, R of which have been red, then P(the next ball is red) = (R+1)/(N+2) (this is a theorem due to Laplace; the proof is not quite trivial, but a derivation is sketched below). Every red ball increases my anticipation of red, while every white decreases it. Notice how it takes many reds to provide strong evidence, but even one red is sufficient for a fairly large update, from 0.5 to 0.67.
3) is neither inductive nor anti-inductive. P(the next ball is red) = 0.5 regardless of what I have drawn. Past observations do not influence expectation of future observations.
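In case it helps, here is a sketch of where the (R+1)/(N+2) rule in 2) comes from. Given p, each bead is independently red with probability p, and drawing without replacement just reveals the beads in some order, so the draws behave (given p) like independent flips of a coin with unknown bias p. With a uniform prior on p, and using the identity $\int_0^1 p^a (1-p)^b \, dp = \frac{a!\,b!}{(a+b+1)!}$:

$$P(\text{next red} \mid R \text{ reds in } N \text{ draws})
= \frac{\int_0^1 p \cdot p^R (1-p)^{N-R}\,dp}{\int_0^1 p^R (1-p)^{N-R}\,dp}
= \frac{(R+1)!\,(N-R)!\,/\,(N+2)!}{R!\,(N-R)!\,/\,(N+1)!}
= \frac{R+1}{N+2}.$$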
With the mud, none of the three examples perfectly describes my prior, but 2) comes closest. Most proposals for universal priors are to some extent inductive; for example, Solomonoff induction assigns a much higher probability to ‘1000 0s’ than to ‘999 0s followed by a 1’.
Brief note: human induction and Solomonoff induction are more sophisticated than 2), mainly because they have better pattern-spotting abilities, so the process is not quite analogous.
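To make the three cases concrete, here is a minimal sketch in Python (the draw counts N = 4 and R = 3 below are just illustrative values I picked). It puts each prior into the same form, a distribution over k, the number of red beads in the jar, updates it on the draws seen so far, and checks the predictive probability of the next bead being red against the closed forms given above.

```python
# Minimal sketch: represent each of the three priors as a distribution over k,
# the number of red beads in the 10-bead jar, update on the draws seen so far,
# and compare the predictive probability with the closed-form rules above.
from math import comb, perm

BEADS = 10

def predictive_prob_red(prior, N, R):
    """P(next draw is red | R reds seen in N draws without replacement),
    for a prior given as a list of weights over k = 0..BEADS red beads."""
    weights = []
    for k, w in enumerate(prior):
        # Probability of one particular ordered sequence containing R reds and
        # N-R whites, drawn without replacement from a jar with k red beads.
        # (perm(n, r) = n!/(n-r)!, and it is 0 when r > n, which rules out
        # compositions inconsistent with the data.)
        likelihood = perm(k, R) * perm(BEADS - k, N - R) / perm(BEADS, N)
        weights.append(w * likelihood)
    total = sum(weights)
    posterior = [x / total for x in weights]
    # Chance that the next bead drawn is red, averaged over the posterior on k.
    return sum(p * (k - R) / (BEADS - N) for k, p in enumerate(posterior))

# Prior 1: exactly 5 red and 5 white beads were put in.
prior_1 = [1.0 if k == 5 else 0.0 for k in range(BEADS + 1)]
# Prior 2: each bead red with probability p, p uniform on [0, 1];
# integrating p out makes every composition k equally likely.
prior_2 = [1.0 / (BEADS + 1)] * (BEADS + 1)
# Prior 3: the monkey makes each bead red or white with probability 1/2.
prior_3 = [comb(BEADS, k) * 0.5**BEADS for k in range(BEADS + 1)]

N, R = 4, 3  # illustrative: four draws so far, three of them red
print(predictive_prob_red(prior_1, N, R), (5 - R) / (10 - N))  # anti-inductive
print(predictive_prob_red(prior_2, N, R), (R + 1) / (N + 2))   # inductive (Laplace)
print(predictive_prob_red(prior_3, N, R), 0.5)                 # neither
```

Representing every prior as a distribution over the jar’s composition is what lets one updating routine cover the anti-inductive, inductive and neutral cases alike; the only thing that changes is the initial weight on each composition.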