“And if the event happens even more when you expect it to, then it is even more evidence for the theory.”
Based on your response I am not sure whether you agreed with this, but I will assume that you did. Correct me if I am wrong!
If you did agree, then consider the Bayesian turkey. Every time he gets fed in November, he concludes that his owner really wants what’s best for him and likes him, because he enjoys eating and keeps getting food. Every day more food is provided, exactly as he expects given his theory, so he uses Bayesian inference to increase his confidence in his theory about the beneficence of his master. As more food is provided, exactly according to his expectations, he concludes that his theory is becoming more and more likely to be true. Towards the end of November, he considers his theory very true indeed.
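To make the turkey’s reasoning concrete, here is a minimal sketch of the day-by-day updating he performs. The specific likelihoods are my assumptions for illustration, not part of the original story:

```python
# Hypothetical numbers: feeding is judged somewhat more likely if the
# owner is benevolent than if he is not.
p_benevolent = 0.5            # turkey's prior on "my owner likes me"
p_fed_if_benevolent = 0.99    # P(fed today | benevolent owner)
p_fed_if_not = 0.80           # P(fed today | not benevolent)

for day in range(30):         # thirty November days, food arrives every day
    num = p_fed_if_benevolent * p_benevolent
    denom = num + p_fed_if_not * (1 - p_benevolent)
    p_benevolent = num / denom          # Bayes' rule, one feeding at a time

print(f"Posterior after a month of feedings: {p_benevolent:.3f}")  # ~0.998
```

Every feeding nudges the odds by the same likelihood ratio, which is why the turkey ends November nearly certain of his owner’s beneficence.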
You can guess the rest of the story. Turkeys are eaten at Thanksgiving. The turkey is killed.
I think you can see that probabilistic evidence, or any evidence, does not (cannot) logically support a theory. It merely corroborates it. One cannot infer a general rule from examples of something. Exactly the opposite is the case. One cannot infer that because food is provided each day, it will continue to be provided each day. Examples of food being provided do not increase the likelihood that the theory is true. But good theories about the world (people like to eat turkeys on Thanksgiving) help one develop expected probabilities of events. If the turkey had a good theory, he would rationally expect certain probabilities. For example, he would predict that he would be given food up until Nov. 25th, but not after.
I can summarize like this. Outcomes of probabilistic experiments do not tell us what it is rational to believe, any more than the turkey was justified in believing in the beneficence of his owner because he kept getting food in November. Probability does not help us develop rational expectations. Rational expectations, on the other hand, do help us to determine what is probable. When the turkey has a rational theory, he can determine the likelihood that he will or will not be given food on a given day.
A perfect Bayesian turkey would produce multiple hypotheses to explain why he is being fed. One hypothesis would be that his owner loves him; another would be that he is being fattened for eating. Let us stipulate that those are the only possibilities. When the turkey continues to be fed, that is new data. But that data doesn’t favor one hypothesis over the other. Both hypotheses are about equally consistent with the turkey continuing to be fed, so little updating will occur in either direction.
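The same sketch as before, with the two stipulated hypotheses, shows why: if feeding is equally likely under both, the likelihood ratio is 1 and Bayes’ rule returns the prior unchanged, no matter how many feedings occur (numbers again mine, for illustration):

```python
# Because the feeding data is (by stipulation) equally likely under both
# hypotheses, the likelihood ratio is 1 and the posterior never moves.
p_loved = 0.5                 # prior on "owner loves me" vs. "being fattened"
p_fed_if_loved = 0.99
p_fed_if_fattened = 0.99      # feeding is equally expected either way

for day in range(30):
    num = p_fed_if_loved * p_loved
    p_loved = num / (num + p_fed_if_fattened * (1 - p_loved))

print(f"Posterior after 30 feedings: {p_loved:.3f}")  # still 0.500
```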
“But good theories about the world (people like to eat turkeys on Thanksgiving) help one develop expected probabilities of events. If the turkey had a good theory, he would rationally expect certain probabilities. For example, he would predict that he would be given food up until Nov. 25th, but not after.”
But this gives the game away. What makes this theory a good one is that people have eaten turkeys for Thanksgiving in the past, and induction tells us they are likely to do so in the future (absent other data suggesting otherwise, like a rise in veganism or something). If the turkey had this information, it wouldn’t even be close. The probability distribution immediately shifts drastically in favor of the Thanksgiving-meal hypothesis.
Then, if Thanksgiving comes and goes and the turkey is still being fed, he can update on that information, and the probability that his owner loves him goes back up.
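A sketch of how that would look numerically, with illustrative figures of my own: the background knowledge enters as a low prior on the benevolence hypothesis, and surviving Thanksgiving is the observation that pushes it back up:

```python
# Illustrative figures only. Knowledge of Thanksgiving acts on the prior:
# most turkeys fed through November are being fattened, not loved.
p_loved = 0.02                  # prior once the turkey knows about Thanksgiving
p_survive_if_loved = 0.99       # P(still fed after Nov. 25th | loved)
p_survive_if_fattened = 0.01    # P(still fed after Nov. 25th | fattened)

# Thanksgiving passes and the turkey is still being fed: update on that.
num = p_survive_if_loved * p_loved
p_loved = num / (num + p_survive_if_fattened * (1 - p_loved))
print(f"P(loved | survived Thanksgiving) = {p_loved:.2f}")  # ~0.67
```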
“What makes this theory a good one is that people have eaten turkeys for Thanksgiving in the past, and induction tells us they are likely to do so in the future (absent other data suggesting otherwise, like a rise in veganism or something).”
I do appreciate your honesty in making this assumption. Usually inductivists are less candid, but they secretly believe exactly as you do. (We call them crypto-inductivists!)
But there is no law of physics, psychology, economics, or philosophy that says that the future must resemble the past. There is also no law of mathematics or logic that says that when a sequence of 100 zeros in a row is observed, the next one is more likely to be another zero. Indeed, there are literally an INFINITE number of hypotheses that are consistent with 100 zeros coming first and anything else coming next.
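One way to see this concretely: here is a sketch (my construction, not anything from the exchange) of a family of hypotheses h_k, one for every k, each of which fits a hundred observed zeros perfectly while predicting a different continuation:

```python
# A family of hypotheses h_k, one for each integer k. Every member fits
# the observed data (100 zeros) perfectly, yet each predicts a different
# next term, so the data alone cannot single out "another zero".
def make_hypothesis(k):
    return lambda n: 0 if n < 100 else k   # h_k: zeros, then k forever

observed = [0] * 100
for k in (0, 1, 7, 42):                     # four of the infinitely many h_k
    h = make_hypothesis(k)
    fits = all(h(n) == x for n, x in enumerate(observed))
    print(f"h_{k}: fits the 100 zeros: {fits}; predicts term 101: {h(100)}")
```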
With respect, the reason you believe that Thanksgiving will keep coming has everything to do with your a priori theory about culture and nothing to do with inductivism. You and I probably have rich theories that cultures can be slow to change, that brains may be hard-wired and difficult to change, that memes reinforce each other, etc. That is why we think Thanksgiving will come again. It is your understanding of our culture that allows you to make predictions about Thanksgiving, not the fact that it has happened before! For example, you didn’t keep writing the year 19XX just because you had done so repeatedly for most of your life. You were not fooled by an imaginary principle of induction when the calendar turned from 1999 to 2000. You did not keep writing 19...something just because you had written it before. You understood the calendar, just as you understand our culture and have deep theories about it. That is why you make certain predictions: Thanksgiving will keep coming, but you won’t continue to write 19XX, no matter how many times you wrote it in the past.
I think you can see that your rationality (not a principle of induction, not an assumption that everything stays the same) is actually what caused you to have rational expectations to begin with.
“But there is no law of physics, psychology, economics, or philosophy that says that the future must resemble the past.”
Of course not. Though I’m pretty sure induction occurs in humans without them willing it. This is just Hume’s view: certain perceptions become habitual to the point where we are surprised if we do not experience them. We have no choice but to do induction. But none of this matters. Induction is just what we’re doing when we do science. If we can’t trust it, we can’t trust science.
“With respect, the reason you believe that Thanksgiving will keep coming has everything to do with your a priori theory about culture and nothing to do with inductivism. You and I probably have rich theories that cultures can be slow to change, that brains may be hard-wired and difficult to change, that memes reinforce each other, etc.”
I’m sorry, my “a priori” theory? In what sense could I possibly know about Thanksgiving a priori? It certainly isn’t an analytic truth, and it isn’t anything like math or something Kant would have considered a priori. Where exactly are these theories coming from, if not from induction? And how come inductivists aren’t allowed to have theories? I have lots of theories, probably close to the same theories you do. The only difference between our positions is that I’m explaining how those theories got here in the first place.
I’m afraid I don’t know what to make of your calendar and number examples. Just because I think science is about induction doesn’t mean I don’t think that social conventions can be learned. Someone explaining the math, that after 1999 comes 2000, counts as pretty good Bayesian evidence that that is how the rest of the world counts. Of course, most children aren’t great Bayesians and just accept what they are told as true. But the fact that people aren’t actually naturally perfect scientists isn’t relevant.
“I think you can see that your rationality (not a principle of induction, not an assumption that everything stays the same) is actually what caused you to have rational expectations to begin with.”
Rationality is just the process of doing induction right. You have to explain what you mean if you mean something else by it :-) (And obviously induction does not mean that everything stays the same, but that there are enough regularities to say general things about the world and make predictions. This is crucial. If there were no regularities, the notion of a “theory” wouldn’t even make sense. There would be nothing for the theory to describe. Theories explain large classes of phenomena across many times. They can’t do that absent regularities.)