Is there some reason to suspect there isn’t some crazy, gerrymandered orthography such that those facts don’t swamp the priors? Or that, in general, for any two incompatible claims X and Y together with our evidence E, there aren’t two finitely specified orthographies which 1. differ in the relative algorithmic prior probabilities of the translations of X and Y into the orthographies and 2. have this difference survive conditionalizing on E? Because if so, we’re still stuck with a really nasty relativism if Solomonoff is the last word on priors.
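For reference, the machinery being argued over, in its standard textbook form (nothing here beyond the usual definitions): the algorithmic prior a reference machine U assigns to a string, and the invariance theorem bounding how far two machines can disagree.

```latex
% Algorithmic (universal) prior of a string x under reference machine U:
\[
  m_U(x) \;=\; \sum_{p \,:\, U(p)=x} 2^{-|p|}
\]
% Invariance theorem: for universal machines U and V there is a constant
% c_{UV}, independent of x, such that
\[
  K_U(x) \;\le\; K_V(x) + c_{UV},
  \qquad\text{and correspondingly}\qquad
  m_U(x) \;\ge\; 2^{-c_{UV}}\, m_V(x).
\]
% The constant is fixed once the pair of machines is fixed, but nothing
% bounds it across pairs; that unbounded constant is exactly the room
% the question above is pointing at.
```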
There are certainly pathological reference machines—but that is only an issue if people use them.
Well, I already agreed that Solomonoff induction depends on a choice of language. There are not too many arguments over this, though—people can usually agree on some simple reference machine.
It seems like you’re saying that, pragmatically speaking, it’s not a problem if we all settle on the same set of formalisms. But I don’t see how that’s relevant to my point, which is that there are no real objective constraints on the formalism we use, and, what’s more, any given formalism could lead to virtually any prior between 0 and 1 for any proposition. So, as I said earlier, Solomonoff doesn’t help very much in objectively guiding our priors. We could just dispense with this Solomonoff business entirely and say, “The problem of priors isn’t an issue if we all just arbitrarily choose the same priors!”
Sure there are. Use a sufficiently far-out reference machine and things go haywire, and you no longer get a useful implementation of Occam’s razor (there’s a toy sketch of this after this reply).
Not really: in many cases, once the proposition and the language are fixed, everyone agrees on the result.
Solomonoff induction is just a formalisation of Occam’s razor, which, IMO, is very useful for selecting priors.
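A minimal toy sketch of this exchange, under loud assumptions: real Solomonoff induction sums 2^-|p| over all programs of a universal machine, whereas here a “reference machine” is shrunk to a hand-written codebook of prefix-code lengths, and the hypothesis names and code lengths are invented for illustration. The only point is that the 2^-length prior flips its ordering when the codebook does.

```python
# Toy sketch only: a "reference machine" reduced to a codebook of
# prefix-code lengths. Hypothesis names and lengths are invented.

def prior(code_lengths):
    """2^-length prior over the listed hypotheses, normalized."""
    mass = {h: 2.0 ** -n for h, n in code_lengths.items()}
    total = sum(mass.values())
    return {h: m / total for h, m in mass.items()}

# Machine A: the intuitively simple hypothesis gets the short code.
machine_a = {"fair_coin": 3, "gerrymandered": 20}

# Machine B: a "far out" machine that hard-wires the gerrymandered
# hypothesis into a short code instead.
machine_b = {"fair_coin": 20, "gerrymandered": 3}

print(prior(machine_a))  # fair_coin ~ 0.99999, gerrymandered ~ 8e-6
print(prior(machine_b))  # same numbers, ordering flipped
```

Which is both sides of the argument in one picture: the formalism itself doesn’t forbid machine B; the defence is only that nobody actually proposes it.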
Key word there being “useful.” “Useful” doesn’t translate to “objectively correct.” Lots of totally arbitrarily set priors are useful, I’m sure, so if that’s your standard, then this whole discussion is again redundant. Anyway, the fact that Occam’s razor-as-we-intuit-it falls out of one arbitrary configuration of the parameters (reference machine, language and orthography) of the theory isn’t in itself evidence that the theory is amazingly useful, or even particularly true. It could just be evidence that the theory is particularly vulnerable to gerrymandering, and could theoretically be configured to support virtually anything. There is, I believe, a certain polynomial whose positive values are exactly the set of primes. But that turns out not to be so interesting, since every recursively enumerable set of integers corresponds to a similar such polynomial.
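As an aside, the closing analogy can be made exact; both halves are standard number-theoretic results, stated here from memory:

```latex
% Jones-Sato-Wada-Wiens (1976): there is a polynomial P in 26 variables
% whose positive values, as the variables range over the nonnegative
% integers, are exactly the primes:
\[
  \{\, P(x_1,\dots,x_{26}) \;:\; x_i \in \mathbb{N},\
      P(x_1,\dots,x_{26}) > 0 \,\}
  \;=\; \{\, p : p \text{ prime} \,\}.
\]
% And, per the MRDP theorem (Matiyasevich, 1970), this is unremarkable:
% every recursively enumerable set S of natural numbers has a Diophantine
% representation
\[
  n \in S \iff \exists\, x_1,\dots,x_k \in \mathbb{N}\;\;
  P_S(n, x_1, \dots, x_k) = 0 .
\]
```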