Sorry, I still don’t get it.
Suppose we somehow do this study, and we find that N% of the time the “simplest possible fit given the known facts” is true, and (100-N)% of the time it isn’t. For what range of Ns would you conclude that Occam’s Razor is correct, and for what range of Ns would you conclude that your alternative hypothesis is instead correct?
I will admit that I’m struggling a bit here, because I’m having trouble coming up with a coherent mental picture of what a legitimate alternate hypothesis to Occam’s razor would actually look like.
In fact, if you take my hypothesis to be true, then Occam’s razor would still fundamentally hold, at least in its simplest form of “a less complicated theory is more likely to be true than a more complicated one”. If “theory-space A” is smaller than “theory-space B”, then any given point in theory-space A is more likely to be true than any given point in theory-space B, even if the answer has an equal chance of being in space A as it does of being in space B. So I think my original hypothesis actually itself reduces to Occam’s Razor.
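The theory-space argument above is just arithmetic, and a toy calculation makes it concrete. The sizes and the 50/50 prior here are numbers I made up purely for illustration:

```python
# Toy illustration (invented numbers): suppose the true theory is equally
# likely to live in the small space A or the large space B.
size_a, size_b = 10, 1000          # candidate theories in each space
p_space = 0.5                      # prior that the truth lies in a given space

# Probability that any *particular* theory in each space is the true one,
# assuming a uniform prior within each space.
p_theory_in_a = p_space / size_a
p_theory_in_b = p_space / size_b

print(p_theory_in_a)  # 0.05
print(p_theory_in_b)  # 0.0005
# Even with equal priors on the two spaces, an individual theory drawn from
# the smaller (simpler) space is 100x more likely to be true here.
```

So even granting the spaces equal total probability, pointwise the simpler theory wins, which is exactly the "reduces to Occam's Razor" observation.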
I think this is where I just say oops and drop this whole train of thought.
Here’s one. The universe is a particularly perverse simulation, largely controlled by a sequence of pseudorandom number generators. This sequence of PRNGs gets steadily more and more Kolmogorov-complicated (the superbeings that run us love complicated forms of torture), so even if we figured out how a given one worked, the next one would already be in play, and since it is totally unrelated, we’d have to start all over. Occam’s razor fails badly in such a universe because the explanation for any particular thing happening gets more complicated over time.
In other words, Quirrell-whistling writ large.
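A toy version of that scenario (my own sketch, nothing canonical): each epoch the simulation swaps in a fresh, unrelated generator, so whatever we learned about the previous epoch transfers nothing.

```python
import random

# Each epoch of the "perverse simulation" uses an unrelated generator.
def epoch_stream(seed, n=1000):
    rng = random.Random(seed)
    return [rng.randint(0, 1) for _ in range(n)]

# "Learn" epoch 0 perfectly by memorizing it, then test on epoch 1.
epoch0 = epoch_stream(seed=0)
epoch1 = epoch_stream(seed=1)
accuracy = sum(a == b for a, b in zip(epoch0, epoch1)) / len(epoch0)
print(accuracy)  # ~0.5, i.e. chance level: the old "explanation" is useless
```

Of course a real Kolmogorov-escalating sequence would be worse than this, since each new generator would also be longer to describe, not merely different.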
I guess we could test this one by looking at successful explanations over time and seeing whether their complexity increases at a steady rate? Then again, I can already find two or three holes in that test...
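The proposed test is basically a regression of explanation complexity against time. A minimal sketch of the mechanics, using invented data points (the years and "bits of description" below are placeholders, not real measurements):

```python
# Hypothetical data: description length (a crude stand-in for Kolmogorov
# complexity) of accepted explanations over time. Values are made up
# purely to show the mechanics of the test.
years = [1700, 1800, 1900, 2000]
complexity = [12, 15, 14, 18]

# Ordinary least-squares slope of complexity vs. year.
n = len(years)
mean_x = sum(years) / n
mean_y = sum(complexity) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, complexity))
         / sum((x - mean_x) ** 2 for x in years))

print(f"complexity trend: {slope:+.4f} bits/year")
```

A persistently positive slope on real data would be (weak) evidence for the perverse-simulation story; a flat or noisy slope would not, and measuring "complexity of an explanation" consistently is itself one of the holes mentioned above.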
Hmm. This is a tricky one.
Yeah, that’s what I think too.
Presumably, what I’d expect to see if Occam’s Razor is an unreliable guideline is that when I’m choosing between two explanations, one of which is more complex under some consistent and coherent definition of complexity, the simpler explanation often turns out to be incorrect.