I guess people are upvoting this because they found it useful, but the claim that you don’t need to prove induction directly, and can instead prove it indirectly by proving Occam’s Razor, seems kind of obvious and not particularly interesting to me. It also seems to me that you’re reducing it to a harder problem, in that resemblance of the past to the present is just one particular way in which a model can be simple. Indeed, you could use the counting argument directly on induction. Anyway, I’ll give this a second read and see if there’s anything I missed.
EDIT: See my second comment, as I didn’t fully understand it after my first read.
Well, I hope this post can be useful as a link you can give to explain the LW community’s mostly shared view about how one resolves the Problem of Induction. I wrote it because I think the LW Sequences’ treatment of the Problem of Induction is uncharacteristically off the mark.
I’m glad you wrote a post about this topic. When I was first reading the sequences, I didn’t find the posts by Eliezer on Induction very satisfying, and it was only after reading Jaynes and a bunch of papers on Solomonoff induction that I felt I had a better understanding of the situation. This post might have sped up that process for me by a day or two, if I had read it a year ago.
There was a while when I thought Solomonoff Induction was a satisfying solution to the problem of induction. But there doesn’t seem to be any justification for the ordering over hypotheses in the Solomonoff Prior. Is there discussion/reading on this that I’m missing?
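To be concrete about the ordering I mean: as I understand it, the prior sums over programs p for a fixed universal prefix machine U, weighting each program by its length in bits,

$$M(x) \;=\; \sum_{p \,:\, U(p)\ \text{starts with}\ x} 2^{-\ell(p)},$$

so every extra bit of program length halves a hypothesis’s weight. The part I don’t see justified is why description length on this particular machine U should be the thing that gets penalised.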
There are several related concepts (mostly from ML) that have caused me a lot of confusion because of the way they overlap with each other and are often presented separately. These include Occam’s Razor and the Problem of Induction, and also “inductive bias”, “simplicity”, “generalisation”, overfitting, model bias and variance, and the general problem of assigning priors. I’d like there to be a post somewhere explaining the relationships between these concepts. I might try to write it, but I’m not confident I can make it clear.
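As a toy sketch of how a few of these interact (just an illustration I have in mind, using nothing beyond numpy): fitting polynomials of increasing degree to noisy samples of a simple curve shows inductive bias, overfitting and generalisation in one place.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: a simple underlying pattern plus noise.
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + 0.2 * rng.standard_normal(x_train.size)

# Held-out data from the same pattern, used to measure generalisation.
x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)

for degree in (1, 3, 9):
    # Capping the degree is an inductive bias: it restricts the hypothesis
    # class, with lower degrees playing the role of "simpler" models.
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    # The high-degree fit drives training error down while test error goes up:
    # that gap is overfitting, and penalising degree is an Occam-style prior.
    print(f"degree {degree}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")
```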
Actually, I do appreciate you highlighting this; however, it’s because I think Eliezer’s solution is somewhat underappreciated, which seems to be the opposite of your view.
Ah yeah. Interesting how all the commenters here are talking about how this topic is quite obvious and settled, yet not saying the same things :)
Okay, it’s making a bit more sense now that I’ve reread “It’s Not About Past And Future”. If you just looked at the position of each particle at time t, we’d all be in different places due to the rotation of the Earth, and electrons would be in different parts of their orbits. So we aren’t really making a similarity claim about primitives, but about higher-level patterns, and your claim is that if we admit that the universe follows these patterns, then this automatically means that these patterns will apply in the future.
I don’t know. I don’t think we know that the universe follows these patterns, as opposed to merely appearing to follow them. And even if the universe has matched these patterns so far, that doesn’t mean it has followed them in the sense of the patterns being the causal reason for our observations, as opposed to some more complex pattern that would also explain them.
“your claim is that if we admit that the universe follows these patterns, then this automatically means that these patterns will apply in the future”

Yeah. My point is that the original statement of the Problem of Induction was naive in two ways:
1. It invokes “similarity”, “resemblance”, and “collecting a bunch of confirming observations”.
2. It talks about “the future resembling the past”.
#1 is the more obviously naive part. #2’s naivety is what I explain in this post’s “Not About Past And Future” section. Once one abandons naive conceptions #1 and #2 by understanding how science actually works, one reduces the Problem of Induction to the more tractable Problem of Occam’s Razor.
“I don’t think we know that the universe follows these patterns, as opposed to merely appearing to follow them.”

Hm, I see this claim as potentially beyond the scope of a discussion of the Problem of Induction.
“Hm, I see this claim as potentially beyond the scope of a discussion of the Problem of Induction.”
Not quite, because in order to avoid the problem of induction you need the universe to actually be following these patterns, in the specific sense that these patterns are what is causing what we observe, not just for the universe to appear to follow these patterns.
If we reverse-engineer an accurate compressed model of how the universe appears to us in the past/present/future, that counts as science.
If you suspect (as I do) that we live in a simulation, then this description applies to all the science we’ve ever done. If you don’t, you can at least imagine that intelligent beings embedded in a simulation that we build can do science to figure out the workings of their simulation, whether or not they also manage to do science on the outer universe.
If we live in a simulation, then it’s likely to be turned off at some point, breaking the induction hypothesis. But then, maybe it doesn’t matter as we wouldn’t be able to observe this.
The problem of induction is more than one thing, because everything is more than one thing.
The most often discussed version is the epistemic problem: the problem of justifying why you should believe that patterns will continue into the future. That isn’t much affected by ontological issues like whether the universe is simulated. Using probabilistic reasoning, it still makes sense to bet on patterns continuing, mainly because you have no specific information about the alternatives. But you do need to abandon certainty and use probability if ontology can pull the rug out from under you.
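One classical illustration of betting on continuation without certainty is Laplace’s rule of succession, offered here only as an example of the probabilistic move rather than a full solution: with a uniform prior over the unknown frequency of a pattern that has held in all n observations so far,

$$P(\text{pattern holds next time} \mid n\ \text{out of}\ n) \;=\; \frac{n+1}{n+2},$$

which grows with n but never reaches 1, so you bet on the pattern while keeping room for the rug to be pulled.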
The ontological problem is pretty much equivalent to the problem of the nature of physical law: what makes the future resemble the past? The standard answer, that physical laws are just descriptions, does not work.
Theories of how quarks, electromagnetism and gravity produce planets with intelligent species on them are scientific accomplishments by virtue of the compression they achieve, regardless of why quarks appear to be a thing.
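A toy way to see the link between finding a pattern and achieving compression (my own illustration, nothing to do with actual physics): data generated by a short rule compresses far better than data with no rule behind it.

```python
import random
import zlib

random.seed(0)

# 10,000 bytes produced by a very short rule: repeat a 4-byte motif.
patterned = bytes([1, 2, 3, 4] * 2500)

# 10,000 bytes with no exploitable rule.
noise = bytes(random.randrange(256) for _ in range(10_000))

print("patterned:", len(zlib.compress(patterned)), "bytes after compression")
print("noise:    ", len(zlib.compress(noise)), "bytes after compression")
# The patterned stream shrinks to a few dozen bytes; the noise barely shrinks.
# A physical theory plays the role of the short rule, at a much grander scale.
```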
There’s no general agreement on what science is supposed to achieve—specifically, there is an instrumentalism versus realism debate. For realists, it does matter if science fails to discover what’s really real.