Occam’s razor is dependent on a descriptive language / complexity metric (so there are multiple flavours of the razor).
I think you might be making this sound easier than it is. If there are an infinite number of possible descriptive languages (or of ways of measuring complexity) aren’t there an infinite number of “flavours of the razor”?
Yes, but not all languages are equal—and some are much better than others—so people use the “good” ones on applications which are sensitive to this issue.
There’s a proof (the invariance theorem) that any two complexity metrics based on Turing-complete description languages can differ by at most an additive constant: the length of a program that interprets one language in the other.
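In standard Kolmogorov-complexity notation (my gloss, not wording from the thread), the invariance theorem says that for any two Turing-complete description languages $A$ and $B$,

$$K_A(x) \le K_B(x) + c_{A,B} \quad \text{for all strings } x,$$

where the constant $c_{A,B}$ is roughly the length of an interpreter for $B$ written in $A$, and crucially does not depend on $x$.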
Unless a complexity metric is specified, the first question seems rather vague.
Of course, the constant can be arbitrarily large.
However, there are a number of domains for which this issue is no big deal.
This is little comfort if you have finitely many hypotheses — you can still find some encoding to order them in any way you want.
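The point can be sketched concretely (a toy construction, with made-up hypothesis names): when there are only finitely many hypotheses, one can always rig a prefix-free encoding so that any chosen ordering becomes the "simplest-first" ordering.

```python
# Toy sketch: with finitely many hypotheses, a "description language"
# can be rigged to impose any complexity ordering we like. We assign
# each hypothesis a prefix-free codeword whose length tracks its
# desired rank, so "simplicity" under this encoding is arbitrary.

def rigged_encoding(desired_order):
    """Assign prefix-free codewords so that hypotheses earlier in
    desired_order get strictly shorter codes (lower 'complexity').
    Codewords 0, 10, 110, 1110, ... form a prefix-free set with
    strictly increasing lengths."""
    return {h: "1" * rank + "0" for rank, h in enumerate(desired_order)}

# Hypothetical hypothesis names, purely for illustration:
enc = rigged_encoding(["flat-earth", "geocentric", "heliocentric"])
complexity = {h: len(code) for h, code in enc.items()}

# Under this rigged metric, "flat-earth" counts as the simplest:
assert complexity["flat-earth"] < complexity["geocentric"] < complexity["heliocentric"]
```

The invariance theorem's additive constant swallows any such rigging only in the limit of long descriptions; over a fixed finite set it places no real constraint, which is exactly the complaint above.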