I could read a book and find that the arguments in the book are “valid”—that it is impossible, or at least unlikely, that the premises are true and the conclusion false. However, what I can’t do by reading is determine whether the premises are true.
In the infamous Alien Autopsy “documentary”, there were three specific claims made for the authenticity of the video.
1) An expert from Kodak examined the film, and verified that it is as old as was claimed.
2) A pathologist was interviewed, who said that the autopsy portrayed was done in the manner that an actual autopsy would have been done.
3) An expert from Spielberg’s movie studio testified that modern special effects could not duplicate the scenes in the video.
If you accept these statements as true, it becomes reasonable to accept that the footage was actually showing what it appeared to show: an autopsy of dead aliens.
Upon seeing these claims, though, my response was along the lines of “I defy the data.” As it turns out, all three of those statements were blatant lies. There was no expert from Kodak who verified the film. Kodak offered to verify the film, but was denied access. Many other pathologists said that the way the autopsy was performed in the film was absurd, and that no competent pathologist would ever do an autopsy on an unknown organism in that manner because it would be completely useless. The person from Spielberg’s movie studio was selectively quoted and was very angry about it. What he really said was that the film was good for whatever grade B studio happened to have produced it.
I could read your book, but I believe that it is more likely that the statements in the book are wrong than it is that psi exists. As Thomas Jefferson did not say, “It is easier to believe that two Yankee professors [Profs. Silliman and Kingsley of Yale] would lie than that stones would fall from the sky.”
The burden of proof is on you, Matthew. Many, many claims of the existence of “psi” have been shown to be bogus, so I give further claims of that nature very little credence. Either tell us about a repeatable experiment—copy a few paragraphs from that book if you have to—or we’re going to ignore you.
Although I also think psi is bogus, my belief has nothing to do with the fact that previous claims of psi have been bogus. Evidence can never justify a theory, any more than finding 10 white swans in a row proves that there are no black swans! Believing that psi is false because of evidence that psi has been false in the past is the logical fallacy of inductivism. Most rational people do not believe in psi because it has no logical theoretical/scientific basis and because it does not explain things well.
Much of this type of argument strikes me as nonsense. Something that is true cannot be justified. One can (and should) argue that something is true. But argument is not justification. If the argument explains something well, then one should believe it, if it is the best theory available.
But evidence can never support any argument. It merely corroborates it. The reason that you believe a coin is fair is not ultimately because the results of an experiment convince you. It would be easy to set up an algorithm that causes the first 3000 examples of a computer simulated coin-flip to have the correct number of heads or tails to make the uninformed believe that the simulated coin flip is fair. But the next 10,000 could yield very different results, just by using an easy-to-create mathematical algorithm. No p-value can be assigned even after 3000 computer simulations of a coin flip. The data never tell a story (to quote someone on another site).
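Such a rigged algorithm is indeed trivial to write. Here is a hypothetical sketch (the function name and parameters are my own, purely for illustration): a simulated coin that behaves fairly for the first 3000 flips and then switches to a heavy bias.

```python
import random

def rigged_coin(n_fair=3000, bias=0.9):
    """Generator: behaves like a fair coin for the first n_fair flips,
    then switches to a heavily biased one (heads with probability `bias`)."""
    count = 0
    while True:
        p = 0.5 if count < n_fair else bias
        yield 1 if random.random() < p else 0
        count += 1

flips = rigged_coin()
first = [next(flips) for _ in range(3000)]    # looks fair: roughly 50% heads
later = [next(flips) for _ in range(10000)]   # heavily biased: roughly 90% heads
```

An observer who saw only the first 3000 flips would have no way, from the data alone, to distinguish this generator from a genuinely fair coin.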
The reason we rationally believe the results of experiment when we flip the coin, but not when we see an apparent computer simulation of a coin flip is:
In the case of the actual coin we already have explanations of the effects of gravity on two-sided metal objects, well before we have any data about coin flips. The same is not true about the computer simulation of the coin flip, unless we see the program ahead of time.
It is the theory about the effects of gravity on two-sided metal objects (with a particular pattern of metal distribution) that we try to evaluate when we flip coins. The data never tell us a story about whether the coin is fair. We first have a theory about the coin and its properties and then we utilize the experiment (the coin flip) to try to falsify our notion that the coin is fair if the coin looks balanced. Or, we falsify the notion that the coin is not fair, if our initial theory is that the coin does not look balanced. Examples of a phenomenon do not increase the probability of its being real.
The reason we may believe that a coin could be fair is that we first evaluate the structure of the material, note that it seems to have a structure that would promote fairness given standard human flips of coins. Only then do we test it. But it is our rational understanding of the properties of the coin and expectations about the environment which make the coin flip reasonable. The results of any test tell you nothing (logically, nothing at all) about the fairness of a coin unless you first have a theory and an explanation about why the coin should or should not be considered fair.
The reason we do not believe in psi is that it does not explain anything, violates multiple known laws of physics, yet creates no alternative scientific structure that allows us to understand and predict events in our world.
This is pretty muddled and wrong. You use a lot of terms in an unorthodox way. For example I don’t know how something that is true cannot ever be justified (how else do you know it’s true!). Also, there is no such thing as science without induction, no laws of physics or predictions. So I’m pretty confused about what your position is. That’s okay though because it looks like you’ve never heard of Bayesian inference. In which case this is a really important day in your life.
“For example I don’t know how something that is true cannot ever be justified (how else do you know it’s true!)”
You can’t know that something is true. We are fallible. And our best theories are often wrong. We gain knowledge by arguing with each other and trying to point out logical contradictions in our explanations. Experiments can help us to show that competing explanations are wrong (or that ours is!).
Induction as a scientific methodology has been known (since Hume) to be impossible. Happy to discuss this further if you like. I will certainly read the articles you suggest. Please consider reading David Deutsch’s The Fabric of Reality. He (better than Hume, in my estimation) shows the complete irrationality of induction, but I am happy to discuss, if you are interested.
Induction as a scientific methodology has been known (since Hume) to be impossible.
I agree with Hume about just about everything. You’re misreading him. Induction definitely isn’t impossible. We do it all the time. Scientists do it for a living. Hume certainly didn’t think it was impossible. What he thought was that there was no deductive reason for expecting that today will be like yesterday. The only justification is induction itself. Thus, any inductive argument begs the question. But his solution definitely wasn’t to throw it out and wallow in extreme skepticism. He thought induction was inevitable (not even something we will, just part of psychological habit formation) and was pretty much the only way of having knowledge about anything.
Hume’s position is basically my position. Though I have some sketchy arguments in my head that might let us go farther than Hume, I’m more than comfortable with that. Now it turns out that if your psychological habit formation occurs in a certain way (the Bayesian way) you’ll start winning bets against those who form beliefs in different ways. It also lets us do statistical/probabilistic experimentation which would never falsify anything but can provide evidence for and against theories. It also explains why we like unfalsified theories that have been tested many, many times more than unfalsified theories that have rarely been tested.
If Deutsch has other arguments you can spell out here I’d be happy to hear them.
This is true if you take “know” to mean “absolute certainty”. And, precisely because absolute certainty never happens, taking “know” in this sense would be pointless. We would never have the opportunity to use such a word, so why bother having it? For that reason, people on this site take the assertion that they “know” a proposition P to mean that the evidence they’ve gathered adds up to a sufficiently high probability for P. Here,
1) “sufficiently high” depends on the context — for example, the expected cost/benefit of acting as though P is true; and
2) the evidence that they’ve gathered “adds” in the sense of Bayesian updating.
That’s all that they mean by “know”.
Induction as a scientific methodology has been known (since Hume) to be impossible.
On the Bayesian interpretation, induction is just a certain mathematical computation. The only limits on its possibility are the limits on your ability to carry out the computations.
“evidence they’ve gathered adds up to a sufficiently high probability for P”
Perhaps I should ask what you mean by “evidence”? By evidence, do you mean examples of an event happening that corroborate a particular theory that someone holds?
So if you have an expectation of something happening, and that something happens, then you are saying that the event is evidence in favor of the theory. And if the event happens even more when you expect it to, then it is even more evidence for the theory, and this increased probability is calculated by using a Bayesian rule to update your increased expectation of the likelihood of the truth of your theory? Have I stated your argument correctly?
All input that you have access to is potentially evidence. That is, ideally, all your input would figure into your evaluation of the probability of any proposition whatsoever. And if some input E weren’t evidence with respect to some particular proposition H, you would still have to run the Bayesian updating computation to determine that E didn’t change the probability that you ought to assign to H.
Obviously, in practice, computing the upshot of all your input is so ideal as to be physically impossible. But, in principle, everything is evidence.
And if the event happens even more when you expect it to then
it is even more evidence for the theory, and this increased probability is calculated by using a Bayesian rule to update your increased expectation of the likelihood of the truth of your theory?
Contradicting prior expectation is a particularly potent kind of evidence. But it is only a special case. Search for “Popper” at Eliezer’s An Intuitive Explanation of Bayes’ Theorem.
“And if the event happens even more when you expect it to then
it is even more evidence for the theory, ”
I am not sure you agreed with this based on your response but I will assume that you did. But correct me if I am wrong!
If you did agree, then consider the Bayesian turkey. Every time he gets fed in November, he concludes that his owner really wants what’s best for him and likes him, because he enjoys eating and keeps getting food. Every day more food is provided, exactly as he expects given his theory, so he uses Bayesian statistical inference to increase the confidence he has in his theory about the beneficence of his master. As more food is provided, exactly according to his expectations, he concludes that his theory is becoming more and more likely to be true. Towards the end of November, he considers his theory very true indeed.
You can guess the rest of the story. Turkeys are eaten at Thanksgiving. The turkey was killed.
I think you can see that probabilistic evidence, or any evidence, does not (cannot) logically support a theory. It merely corroborates it. One cannot infer a general rule from an example of something. Exactly the opposite is the case. One cannot infer that because food is provided each day, it will continue to be provided each day. Examples of food being provided do not increase the likelihood that the theory is true. But good theories about the world (people like to eat turkeys on Thanksgiving) help one develop expected probabilities of events. If the turkey had a good theory, he would rationally expect certain probabilities. For example, he would predict that he would be given food up until Nov. 25th, but not after.
I can summarize like this. Outcomes of probabilistic experiments do not tell us what it is rational to believe, any more than the turkey was justified in believing in the beneficence of his owner because he kept getting food in November. Probability does not help us develop rational expectations. Rational expectations, on the other hand, do help us to determine what is probable. When the turkey has a rational theory, he can determine the likelihood that he will or will not be given food on a given day.
A perfect Bayesian turkey would produce multiple hypotheses to explain why he is being fed. One hypothesis would be that his owner loves him, another would be that he is being fattened for eating. Let us stipulate that those are the only possibilities. When the turkey continues to be fed that is new data. But that data doesn’t favor one hypothesis over the other. Both hypotheses are about equally consistent with the turkey continuing to be fed so little updating will occur in either direction.
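That no-update claim is easy to check numerically. A minimal sketch (the function name and the stipulated numbers are mine, purely illustrative): when both hypotheses assign the same likelihood to an observation, the posterior equals the prior.

```python
def posterior(prior, lik_h1, lik_h2):
    """P(H1 | data) for two exhaustive hypotheses H1 and H2."""
    return lik_h1 * prior / (lik_h1 * prior + lik_h2 * (1 - prior))

# H1 = "owner loves me", H2 = "owner is fattening me for eating".
# Both hypotheses predict daily feeding with certainty.
p = 0.5
for day in range(30):            # a month of being fed
    p = posterior(p, 1.0, 1.0)   # equal likelihoods under both hypotheses
# p is still exactly 0.5: the feedings told the turkey nothing
# about which hypothesis is true.
```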
But good theories about the world (people like to eat turkeys on Thanksgiving) help one develop expected probabilities of events. If the turkey had a good theory, he would rationally expect certain probabilities. For example, he would predict that he would be given food up until Nov. 25th, but not after.
But this gives the game away. What makes this theory a good one is that people have eaten turkeys for Thanksgiving in the past and induction tells us they are likely to do so in the future (absent other data that suggests otherwise like a rise in Veganism or something). If the turkey had this information it isn’t even close. The probability distribution immediately shifts drastically in favor of the Thanksgiving meal hypothesis.
Then, if Thanksgiving comes and goes and the turkey is still being fed he can update on that information and the probability his owner loves him goes up again.
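A hedged numerical sketch of that final update (all of the numbers here are invented for illustration): surviving past Thanksgiving is likely under the “owner loves me” hypothesis and very unlikely under the “fattening me up” hypothesis, so observing survival shifts the posterior sharply.

```python
def posterior(prior, lik_h1, lik_h2):
    """P(H1 | data) for two exhaustive hypotheses:
    H1 = 'owner loves me', H2 = 'owner is fattening me for Thanksgiving'."""
    return lik_h1 * prior / (lik_h1 * prior + lik_h2 * (1 - prior))

# Background knowledge (people eat turkeys at Thanksgiving) makes the
# Thanksgiving-meal hypothesis much more probable a priori:
p_loves = 0.1

# Observation: the turkey is still alive and fed after Thanksgiving.
# Likely if loved (0.95), very unlikely if being fattened (0.02).
p_loves = posterior(p_loves, 0.95, 0.02)
# p_loves jumps to roughly 0.84: survival is strong evidence in favor
# of the 'owner loves me' hypothesis.
```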
“What makes this theory a good one is that people have eaten turkeys for Thanksgiving in the past and induction tells us they are likely to do so in the future (absent other data that suggests otherwise like a rise in Veganism or something).”
I do appreciate your honesty in making this assumption. Usually inductivists are less candid (but secretly believe exactly as you do. We call them crypto-inductivists!)
But there is no law of physics, psychology, economics, or philosophy that says that the future must resemble the past. There also is no law of mathematics or logic that says that when a sequence of 100 zeroes in a row is observed, the next one is more likely to be another zero. Indeed, there are literally an INFINITE number of hypotheses that are consistent with 100 zeroes coming first and then anything else coming next.
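Two such hypotheses can be exhibited directly; this is an illustrative sketch (both rules are my own inventions, not from the discussion): they agree on every observation made so far and disagree forever after.

```python
# Two different rules that agree on the first 100 observations
# and disagree on everything afterward.
def always_zero(n):
    return 0

def zero_then_one(n):
    return 0 if n < 100 else 1

prefix_a = [always_zero(n) for n in range(100)]
prefix_b = [zero_then_one(n) for n in range(100)]
assert prefix_a == prefix_b                     # identical on all observed data
assert always_zero(100) != zero_then_one(100)   # yet they diverge on the next term
# For any finite prefix, infinitely many rules fit it: vary the switch
# point, the post-switch value, add periodicity, and so on.
```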
With respect, the reason you believe that Thanksgiving will keep coming has everything to do with your a-priori theory about culture and nothing to do with inductivism. You and I probably have rich theories that cultures can be slow to change, that brains may be hard-wired and difficult to change, that memes reinforce each other, etc. That is why we think Thanksgiving will come again. It is your understanding of our culture that allows you to make predictions about Thanksgiving, not the fact that it has happened before! For example, you didn’t keep writing the year 19XX just because most of your life you did so, and did so repeatedly. You were not fooled by an imaginary principle of induction when the calendar turned from 1999 to 2000. You did not keep writing 19...something just because you had written it before. You understood the calendar, just as you understand our culture and have deep theories about it. That is why you make certain predictions (Thanksgiving will keep coming, but you won’t continue to write 19XX, no matter how many times you wrote it in the past).
I think you can see that your rationality (not a principle of induction, not an assumption that everything stays the same) is actually what caused you to have rational expectations to begin with.
But there is no law of physics, psychology, economics, or philosophy that says that the future must resemble the past
Of course not. Though I’m pretty sure induction occurs in humans without them willing it. This is just Hume’s view: certain perceptions become habitual to the point where we are surprised if we do not experience them. We have no choice but to do induction. But none of this matters. Induction is just what we’re doing when we do science. If we can’t trust it, we can’t trust science.
With respect, the reason you believe that Thanksgiving will keep coming has everything to do with your a-priori theory about culture and nothing to do with inductivism. You and I probably have rich theories that cultures can be slow to change, that brains may be hard-wired and difficult to change, that memes reinforce each other, etc.
I’m sorry, my “a priori” theory? In what sense could I possibly know about Thanksgiving a priori? It certainly isn’t an analytic truth, and it isn’t anything like math or something Kant would have considered a priori. Where exactly are these theories coming from, if not from induction? And how come inductivists aren’t allowed to have theories? I have lots of theories, probably close to the same theories you do. The only difference between our positions is that I’m explaining how those theories got here in the first place.
I’m afraid I don’t know what to make of your calendar and number examples. Just because I think science is about induction doesn’t mean I don’t think that social conventions can be learned. Someone explaining that after 1999 comes 2000 counts as pretty good Bayesian evidence that that is how the rest of the world counts. Of course most children aren’t great Bayesians and just accept what they are told as true. But the fact that people aren’t actually naturally perfect scientists isn’t relevant.
I think you can see that your rationality,( not a principle of induction, not that everything stays the same) is actually what caused you to have rational expectations to begin with.
Rationality is just the process of doing induction right. You have to explain what you mean if you mean something else by it :-) (And obviously induction does not mean everything stays the same, but that there are enough regularities to say general things about the world and make predictions. This is crucial. If there were no regularities, the notion of a “theory” wouldn’t even make sense. There would be nothing for the theory to describe. Theories explain large classes of phenomena across many times. They can’t do that absent regularities.)
The Wikipedia entry
The SEP entry
Eliezer’s explanation of the Math
Also: the “Rationality and Science” subsection at the bottom here.
Who has better links?
Edit: Welcome to less wrong, btw! Feel free to introduce yourself.
Edit again: This PDF looks good.
I wouldn’t call Popperianism unorthodox exactly.
I sort of see some Popper in the comment but I also see a good deal that isn’t.
“For example I don’t know how something that is true cannot ever be justified (how else do you know it’s true!”
You can’t know that something is true. We are fallible. And our best theories are often wrong. We gain knowledge by arguing with each other and trying to point out logical contradictions in our explanations. Experiments can help us to show that competing explanations are wrong (or that ours is!) .
Induction as a scientific methodology has been known (since Hume) to be impossible. Happy to discuss this further if you like. I will certainly read the articles you suggest. Please consider reading David Deutsch’s, The Fabric of Realtiy. He (better than Hume in my estimation) shows the ’complete irrationality of induction, but I am happy to discuss, if you are interested.
I agree with Hume about just about everything. You’re misreading him. Induction definitely isn’t impossible. We do it all the time. Scientists do it for a living. Hume certainly didn’t think it was impossible. What he thought was that there was no deductive reason for expecting that today will be like yesterday. They only justification is induction itself. Thus, any inductive argument begs the question. But his solution definitely wasn’t to throw it out and wallow in extreme skepticism. He thought induction was inevitable (not even something we will, just part of psychological habit formation) and was pretty much the only way of having knowledge about anything.
Hume’s position is basically my position. Though I have some sketchy arguments in my head that might let us go farther than Hume, I’m more than comfortable with that. Now it turns out that if your psychological habit formation occurs in a certain way (the Bayesian way) you’ll start winning bets against those who form beliefs in different ways. It also lets us do statistical/probabilistic experimentation which would never falsify anything but can provide evidence for and against theories. It also explains why we like unfalsified theories that have been tested many, many times more than unfalsified theories that have rarely been tested.
If Deutsch has other arguments you can spell out here I’d be happy to hear them.
This is true if you take “know” to mean “absolute certainty”. And, precisely because absolute certainty never happens, taking “know” in this sense would be pointless. We would never have the opportunity to use such a word, so why bother having it? For that reason, people on this site take the assertion that they “know” a proposition P to mean that the evidence they’ve gathered adds up to a sufficiently high probability for P. Here,
“sufficiently high” depends on the context — for example, the expected cost/benefit of acting as though P is true; and
the evidence that they’ve gathered “adds” in the sense of Bayesian updating.
That’s all that they mean by “know”.
On the Bayesian interpretation, induction is just a certain mathematical computation. The only limits on its possibility are the limits on your ability to carry out the computations.
“evidence they’ve gathered adds up to a sufficiently high probability for P”
Perhaps I should ask what you mean by “evidence”? By evidence do you mean examples of an event happening that corroborates a particular theory that someone holds ?
So if
you have an expectation of something happening, and
that something happens,
then you are saying that the event is evidence in favor of the theory. And if the event happens even more when you expect it to then
it is even more evidence for the theory, and this increased probability is calculated by using a Bayesian rule to update your increased expectation of the likelihood of the truth of your theory?
Have I stated your argument correctly?
All input that you have access to is potentially evidence. That is, ideally, all your input would figure into your evaluation of the probability of any proposition whatsoever. And if some input E weren’t evidence with respect to some particular proposition H, you would still have to run the Bayesian updating computation to determine that E didn’t change the probability that you ought to assign to H.
Obviously, in practice, computing the upshot of all your input is so ideal as to be physically impossible. But, in principle, everything is evidence.
Contradicting prior expectation is a particularly potent kind of evidence. But it is only a special case. Search for “Popper” at Eliezer’s An Intuitive Explanation of Bayes’ Theorem.
“And if the event happens even more when you expect it to then
it is even more evidence for the theory, ”
I am not sure you agreed with this based on your response but I will assume that you did. But correct me if I am wrong!
If you did agree, then consider the Bayesian turkey. Every time he gets fed in November, he concludes that his owner really wants what’s best for him and likes him, because he enjoys eating and keeps getting food. Every day more food is provided, exactly as he expects given his theory, so he uses Bayesian statistical inference to increase the confidence he has in his theory about the beneficence of his master. As more food is provided, exactly according to his expectations, he concludes that his theory is becoming more and more likely to be true. Towards the end of November, he considers his theory very true indeed.
You can guess the rest of the story. Turkeys are eaten at Thanksgiving. The turkey was killed.
I think you can see that probabilistic evidence, or any evidence, does not (can not) logically support a theory. It merely corroborates it. One can not infer from an example of something, a general rule. Exactly the opposite is the case. One cannot infer that because food is provided each day, that it will continue to be provided each day. Examples of food being provided do not increase the likelihood that the theory is true. But good theories about the world (people like to eat turkeys on Thanksgiving) helps one develop expected probabilities of events. If the turkey had a good theory, he would rationally expect certain probabilities. For example he would predict that he would be given food up until Nov. 25th, but not after.
I can summarize like this. Outcomes of probabilistic experiments do not tell us what it is rational to believe, any more than the turkey was justified in believing in the beneficence of his owner because he kept getting food in November. Probability does not help us develop rational expectations. Rational expectations, on the other hand, do help us to determine what is probable. When the turkey has a rational theory, he can determine the likelihood that he will or will not be given food on a given day.
A perfect Bayesian turkey would produce multiple hypotheses to explain why he is being fed. One hypothesis would be that his owner loves him, another would be that he is being fattened for eating. Let us stipulate that those are the only possibilities. When the turkey continues to be fed that is new data. But that data doesn’t favor one hypothesis over the other. Both hypotheses are about equally consistent with the turkey continuing to be fed so little updating will occur in either direction.
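To make the no-updating point concrete, here is a minimal Bayesian sketch with hypothetical numbers of my own choosing: because both hypotheses make daily feeding nearly certain, the likelihood ratio is close to 1, and even a month of feedings barely moves the turkey away from an even prior.

```python
# Hypothetical numbers, not anything from the discussion above: both
# hypotheses predict daily feeding almost perfectly, so each feeding
# carries almost no information about which one is true.

def update(prior_love, p_fed_if_love, p_fed_if_dinner):
    """Posterior P(owner loves me | fed today) via Bayes' rule."""
    prior_dinner = 1 - prior_love
    evidence = prior_love * p_fed_if_love + prior_dinner * p_fed_if_dinner
    return prior_love * p_fed_if_love / evidence

p = 0.5  # even prior between "owner loves me" and "I'm dinner"
for _ in range(30):            # thirty November feedings
    p = update(p, 0.99, 0.98)  # near-equal likelihoods under each hypothesis
print(round(p, 3))             # still close to 0.5 after a month
```

The design point is that updating is driven by the likelihood *ratio*: when both hypotheses predict the data about equally well, the data cannot separate them no matter how much of it accumulates.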
But this gives the game away. What makes this theory a good one is that people have eaten turkeys for Thanksgiving in the past and induction tells us they are likely to do so in the future (absent other data that suggests otherwise like a rise in Veganism or something). If the turkey had this information it isn’t even close. The probability distribution immediately shifts drastically in favor of the Thanksgiving meal hypothesis.
Then, if Thanksgiving comes and goes and the turkey is still being fed he can update on that information and the probability his owner loves him goes up again.
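Again with hypothetical numbers of my own, here is a sketch of the two moves just described: cultural background knowledge sets a prior strongly favoring the dinner hypothesis, and then surviving past Thanksgiving is strong evidence that shifts belief back toward the benevolent-owner hypothesis.

```python
# Hypothetical numbers: induction over past Thanksgivings sets a prior
# heavily favoring the dinner hypothesis; still being alive on Nov. 26
# is then strong evidence in the other direction.

def posterior_love(prior_love, p_alive_if_love, p_alive_if_dinner):
    """P(owner loves me | still alive after Thanksgiving)."""
    prior_dinner = 1 - prior_love
    evidence = (prior_love * p_alive_if_love
                + prior_dinner * p_alive_if_dinner)
    return prior_love * p_alive_if_love / evidence

prior_love = 0.05  # cultural knowledge makes "dinner" the strong favorite
# A loved turkey survives Thanksgiving; a dinner turkey rarely does.
p = posterior_love(prior_love, p_alive_if_love=0.99, p_alive_if_dinner=0.02)
print(round(p, 2))  # prints 0.72: one surprising survival outweighs the prior
```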
“What makes this theory a good one is that people have eaten turkeys for Thanksgiving in the past and induction tells us they are likely to do so in the future (absent other data that suggests otherwise like a rise in Veganism or something).”
I do appreciate your honesty in making this assumption. Usually inductivists are less candid (though they secretly believe exactly as you do. We call them crypto-inductivists!)
But there is no law of physics, psychology, economics, or philosophy that says that the future must resemble the past. Nor is there a law of mathematics or logic that says that when a sequence of 100 zeroes in a row is observed, the next one is more likely to be another zero. Indeed, there are literally an INFINITE number of hypotheses consistent with 100 zeroes coming first and then anything else coming next.
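The infinite-hypotheses point can be illustrated with a small sketch (the particular construction is mine, chosen for simplicity): for every integer k there is a distinct hypothesis that agrees perfectly with 100 observed zeroes and then predicts k next, so the data alone cannot choose among them.

```python
# For each k, build a hypothesis that fits 100 zeroes and predicts k next.
def make_hypothesis(k):
    """Predict 0 for the first 100 observations, then k forever after."""
    return lambda n: 0 if n < 100 else k

observed = [0] * 100
hypotheses = [make_hypothesis(k) for k in range(5)]  # five of infinitely many

# Every hypothesis fits the observed data perfectly...
assert all(h(n) == observed[n] for h in hypotheses for n in range(100))
# ...yet they disagree about the very next observation.
print([h(100) for h in hypotheses])  # -> [0, 1, 2, 3, 4]
```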
With respect, the reason you believe that Thanksgiving will keep coming has everything to do with your a-priori theory about culture and nothing to do with inductivism. You and I probably have rich theories that cultures can be slow to change, that brains may be hard-wired and difficult to change, that memes reinforce each other, etc. That is why we think Thanksgiving will come again. It is your understanding of our culture that allows you to make predictions about Thanksgiving, not the fact that it has happened before. For example, you didn’t keep writing the year 19XX just because you had done so repeatedly for most of your life. You were not fooled by an imaginary principle of induction when the calendar turned from 1999 to 2000. You did not keep writing 19-something just because you had written it before. You understood the calendar, just as you understand our culture and have deep theories about it. That is why you make certain predictions (Thanksgiving will keep coming, but you won’t continue to write 19XX, no matter how many times you wrote it in the past).
I think you can see that your rationality (not a principle of induction, not an assumption that everything stays the same) is actually what caused you to have rational expectations to begin with.
Of course not. Though I’m pretty sure induction occurs in humans without them willing it. This is just Hume’s view: certain perceptions become habitual to the point where we are surprised if we do not experience them. We have no choice but to do induction. But none of this matters. Induction is just what we’re doing when we do science. If we can’t trust it, we can’t trust science.
I’m sorry, my “a priori” theory? In what sense could I possibly know about Thanksgiving a priori? It certainly isn’t an analytic truth, and it isn’t anything like math or something Kant would have considered a priori. Where exactly are these theories coming from if not from induction? And how come inductivists aren’t allowed to have theories? I have lots of theories, probably close to the same theories you do. The only difference between our positions is that I’m explaining how those theories got here in the first place.
I’m afraid I don’t know what to make of your calendar and number examples. Just because I think science is about induction doesn’t mean I don’t think that social conventions can be learned. Someone explaining that 2000 comes after 1999 counts as pretty good Bayesian evidence that that is how the rest of the world counts. Of course most children aren’t great Bayesians and just accept what they are told as true. But the fact that people aren’t naturally perfect scientists isn’t relevant.
Rationality is just the process of doing induction right. You have to explain what you mean if you mean something else by it :-) (And obviously induction does not mean everything stays the same, but that there are enough regularities to say general things about the world and make predictions. This is crucial. If there were no regularities, the notion of a “theory” wouldn’t even make sense. There would be nothing for the theory to describe. Theories explain large classes of phenomena across many times. They can’t do that absent regularities.)