That’s because each additional level of complexity adds another probability factor that needs to be multiplied in. It’s a simple consequence of the laws of probability.
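(To spell out the arithmetic behind this claim, with illustrative numbers of my own rather than anything from the thread: if a model has n independent components and each one is correct with probability p, the product rule gives

$$P(M) \;=\; \prod_{i=1}^{n} p_i \;\approx\; p^{\,n},$$

so each extra component multiplies the prior by another factor of at most 1. With p = 1/2, for instance, a 10-component model starts out at 2⁻¹⁰ ≈ 0.001.)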
One problem with that explanation is that it does not reference the current universe at all. It implies that the Occamian prior should work well in any universe where the laws of probability hold. Is that really true? Note, on the other hand, that in the Lazy prior, the measure of “easiness” is very much grounded in how this universe works and what state it is in.
I believe that the Occamian prior should hold true in any universe where the laws of probability hold. I don’t see any reason why not, since the assumption behind it is only that the individual levels of complexity in different models each carry roughly the same probability.
Laws of probability say that

I suspect that to you “Occam’s Razor” refers to this law (I don’t think that’s the usual interpretation, but it’s reasonable). However, this law does not make a prior. It does not say anything about whether we should prefer a 6-state Turing machine to a 100-state one when building a model. Try using the laws of probability alone to decide that.

the Occamian prior should hold true

Priors don’t “hold true”; that’s a type error (or at least bad wording).
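(To make the gap concrete: the product rule by itself assigns no weights to machines, so to actually prefer the 6-state machine one must choose a weighting, for example the standard Solomonoff-style

$$P(M) \;\propto\; 2^{-\ell(M)},$$

where ℓ(M) is the length in bits of M’s description. Under that choice the shorter machine wins, but the 2^{−ℓ} weighting is itself an added assumption rather than a law of probability; it is the textbook illustration, not something specified in this exchange.)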
That is indeed what it means in my mind.
I agree that it was bad wording. Perhaps something more along the lines of “should work well.”
Just to clarify, are you referring to the differences between classical probability and quantum amplitudes? Or do you mean something else?
Not at all. I’m repeating a truism: to make a claim about the territory, you should look at the territory. “Occamian prior works well” is an empirical claim about the real world (though it’s not easy to measure). “Probabilities need to be multiplied” is far less empirical (it’s about as empirical as 2+2=4). Therefore the former shouldn’t follow from the latter.