Oh, and not to be nitpicky, but if you’re relying on any kind of metric (e.g. your vision) to ascertain that the sun does rise tomorrow, you are relying on induction. Without induction, there is simply no way of establishing that your observations correlate with anything meaningful. “The sun will rise tomorrow” cannot actually be confirmed without assuming induction; without evidence from past experience confirming their reliability, our sensory data are meaningless. This is getting into a nightmarish level of abstraction for a relatively simple point, though.
Believing the sun will rise tomorrow with P=10^-9 is not failing to believe in induction. It’s making a serious mistake, or being privy to some very interesting evidence. Without induction, no probability estimate is possible, because we simply have no idea what will happen.
I suspect this argument stems from different definitions of “induction.”
If you define believing in induction as believing that the future will be like the past, then it is possible to believe that the future will not be like the past; believing that the sun will not rise tomorrow is one example. Similarly, someone could suppose that everything that will happen tomorrow will be totally different from today, and he could still use Bayes’ theorem, if he had any probability estimates at all.
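To make the mechanical point concrete, here is a minimal sketch of such an update in Python. The 10^-9 prior is the figure mentioned earlier in this thread; the piece of evidence and its likelihoods are invented purely for illustration.

```python
# Toy Bayes' theorem update for H = "the sun will rise tomorrow".
# The 1e-9 prior is the figure from this thread; the evidence E and its
# likelihoods are made-up numbers, used only to show that the arithmetic
# goes through no matter how anti-inductive the prior is.

def bayes_update(prior_h, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) via Bayes' theorem."""
    joint_h = p_e_given_h * prior_h
    joint_not_h = p_e_given_not_h * (1.0 - prior_h)
    return joint_h / (joint_h + joint_not_h)

prior = 1e-9  # "the sun almost certainly will not rise"
posterior = bayes_update(prior, p_e_given_h=0.99, p_e_given_not_h=0.01)
print(posterior)  # ~9.9e-8: still tiny, but the update rule applied without complaint
```

The update rule itself makes no reference to whether the prior encodes “the future will be like the past”; it only needs some prior and some likelihoods.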
You say, “Without induction, no probability estimate is possible, because we simply have no idea what will happen.” Probability estimates are subjective degrees of belief, and it may be that there is some process like induction that generates them in a person’s mind. But this doesn’t mean that he believes, intellectually, that the future will be like the past, nor that he actually uses this claim in coming up with an estimate; as I just pointed out, some claims explicitly deny that induction will continue to work, and some people sometimes believe them (e.g. “The world will end tomorrow!”).
In any case, it doesn’t matter how a person comes up with his subjective estimates; a prior probability estimate doesn’t need to be justified, it just needs to be used. This post was not intended to justify people’s priors, but to justify induction as an explicit reasoning process, which is not used in generating priors.
I suspect that the argument arises because, deep down, you don’t yet accept that Bayes’ theorem is more fundamental than induction and that it shows us how to use evidence other than inductive evidence.
That said, you may well be correct in your “nitpick” to the effect that we wouldn’t even be able to interpret sense data as ordinary everyday evidence without induction. That may well be, which would mean that we have to use induction and Bayes’ theorem at the sense-data level before we can use Bayes at the level of ordinary everyday evidence. But that does not make induction as fundamental as Bayes.
Since my original point was amended to indicate that the claim about Bayes was overstated, and that the true problem is that Bayes is quite useless without assuming induction is justified (i.e. any observation about the real world or prediction about the future presumes the principle of induction to be justified), I would hardly call this nitpicking. It is my point. Insofar as Bayes’ theorem is purely mathematical, it is quite fundamental; I don’t dispute that. But you can’t apply math to the real world without having a real world, and without assuming induction, you can’t really have a concept of a real world.
It has occurred to me that the concept of “induction” on which I rely may be different in nature from the one being used by the people arguing with me, which is unsurprisingly causing problems. Induction, as I mean it, is not simply “the future will be like the past” but “the correlation between past observations and future observations is nonzero.” That is so fundamental that I do not think the human mind is capable of not believing it, at least implicitly.
If induction means “the correlation between past observations and future observations is nonzero,” then not assuming induction could mean one of two things:
1) I might think there is some chance that the correlation is nonzero, and some chance that the correlation is zero. In this case Bayesian reasoning will still work, and confirms that the correlation is nonzero (see the sketch after this list).
2) I might think the correlation is certainly zero. But in this case most people would not describe this as “not assuming induction”, but as making a completely unjustified and false assumption instead. It is not negative (not assuming) but positive (assuming something).
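For what it’s worth, case (1) is easy to demonstrate numerically. Here is a minimal sketch under invented assumptions: a stream of binary observations, a “zero correlation” hypothesis under which each new observation repeats the previous one only by chance, a “nonzero correlation” hypothesis under which repeats happen 90% of the time, and a 50/50 prior between them. None of these numbers come from the discussion; they only show that an agent who merely entertains induction, rather than presupposing it, can still run the update and be pushed toward the nonzero-correlation hypothesis by the data.

```python
# H0: zero correlation - a new observation repeats the previous one by chance (p = 0.5).
# H1: nonzero correlation - a new observation repeats the previous one with p = 0.9.
# The 0.9 and the 50/50 prior are illustrative assumptions only.

def posterior_h1(repeats, non_repeats, p_repeat_h1=0.9, prior_h1=0.5):
    """P(H1 | data) after seeing `repeats` repetitions and `non_repeats` breaks."""
    like_h0 = 0.5 ** (repeats + non_repeats)
    like_h1 = (p_repeat_h1 ** repeats) * ((1.0 - p_repeat_h1) ** non_repeats)
    return like_h1 * prior_h1 / (like_h1 * prior_h1 + like_h0 * (1.0 - prior_h1))

for n in (0, 5, 20):
    print(n, posterior_h1(repeats=n, non_repeats=0))
# 0  -> 0.5      (agnostic before any data)
# 5  -> ~0.95    (a short run of repeats already favours H1)
# 20 -> ~0.99999 (the data, not a presupposed principle, does the work)
```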
A universe in which every kind of past observation is uncorrelated with future observations of the same kind would be a world in which animals could not evolve. Hence, not the kind of universe I would care to (or be able to) contemplate. However, I am quite capable of believing that there are some kinds of observations which do not correlate with future instances of themselves. Random noise exists.
Assuming you have no objection to that, I suppose you can go on preaching that the key mystery of the universe and the basis of all epistemology is induction. I have no objection. But I do think you ought to read Jaynes. Who knows? You might find something there to change your mind or perhaps a clue to dissolving the mystery.
I don’t think induction is of particular importance: since we can’t function without assuming its validity, entertaining the idea that it is invalid is not constructive. I’d be very curious to see someone solve the problem of induction (which I briefly thought this was an attempt at), but it’s hardly an urgent matter.
Picking up on animals not evolving makes about as much sense as picking up on the fact that, if it weren’t for gravity, it would be tough to play badminton. This reinforces my suspicion that our concepts of what we’re arguing about are so vastly different that a productive resolution is impossible.
I suppose the origin of this whole digression could be summarized by saying that I thought the post was about (the problem of) induction, and so was a useless point about a moderately interesting topic. Instead, it’s about (the practice of) induction, making it a decent but not terribly useful point about a rather uninteresting (or at least simple) topic. It is perhaps even less salient than the observation that, if we assume an infinite set of possibilities whose probabilities must sum to a finite total, then Occam’s razor must work by sheer force of arithmetic: past some arbitrary point, the probabilities have to decrease.
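For completeness, that last observation is just a fact about convergent series. A minimal statement of it, in my wording rather than the original commenter’s:

$$\sum_{i=1}^{\infty} p_i = 1,\quad p_i \ge 0 \;\implies\; \text{for every } \varepsilon > 0,\ \#\{\, i : p_i > \varepsilon \,\} \le \lfloor 1/\varepsilon \rfloor,$$

so beyond some finite index the probabilities assigned to the remaining possibilities must trail off toward zero, however the possibilities are ordered.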
Ouch. Burn