Last time I checked, priors were fairly subjective even here. We don’t know what the best way to assign priors is. Things like “Solomonoff induction” depend on an arbitrary choice of machine.
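(For concreteness, here is the textbook form of the Solomonoff prior, not something stated elsewhere in this thread: for a universal prefix machine $U$ it assigns each string $x$ the weight of the programs that produce it, and the usual invariance theorem only pins this down up to a constant that depends on which universal machine you picked.)

$$m_U(x) \;=\; \sum_{p \,:\, U(p)=x} 2^{-\ell(p)}, \qquad m_U(x) \;\geq\; c_{U,V}\, m_V(x) \;\text{ for any two universal machines } U, V.$$

The machine-dependence being complained about above is exactly that choice of $U$ and the resulting constant $c_{U,V}$.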
Priors are indeed up for grabs, but a set of priors about the universe ought to be consistent with itself, no? A set of priors based only on complexity may indeed not be the best set of priors—that’s what all the discussions about “leverage penalties” and the like are about, enhancing Solomonoff induction with something extra. But what you seem to suggest is a set of priors about the universe designed for the express purpose of making human utility calculations balance out? Wouldn’t such a set of priors require the anthropomorphization of the universe, and effectively mean sacrificing all sense of epistemic rationality?
The best “priors” about the universe would be 1 for the universe that is actually right around you, and 0 for everything else. Any other priors are a compromise, an engineering decision.
What I am thinking is that there is a considerably better way to assign priors which we do not know of yet—the way that assigns equal probabilities to each side of a die when there is no reason to prefer one over another—the way that corresponds to symmetries in the evidence.
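A minimal sketch of the symmetry idea (my own illustration, not anything proposed in the thread): if nothing in the evidence distinguishes outcomes that a symmetry maps onto each other, the prior should be invariant under that symmetry. Averaging any starting guess over the full permutation group of a die’s faces forces the uniform 1/6 prior.

```python
# Sketch: a prior that respects symmetries in the evidence.
# If no evidence distinguishes the faces of a die, the prior must be
# invariant under every permutation of the faces, which forces uniformity.

from itertools import permutations

def symmetrize(prior, group):
    """Average a prior over a group of permutations of the outcome indices,
    yielding the unique group-invariant prior with the same total mass."""
    n = len(prior)
    out = [0.0] * n
    for perm in group:
        for i in range(n):
            out[perm[i]] += prior[i] / len(group)
    return out

faces = 6
arbitrary_prior = [0.5, 0.1, 0.1, 0.1, 0.1, 0.1]   # any starting guess
full_symmetry = list(permutations(range(faces)))    # nothing distinguishes the faces
print(symmetrize(arbitrary_prior, full_symmetry))   # ~[1/6] * 6
```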
We don’t know that there will still be the same problem once we have a non-stupid way to assign priors (especially as the non-stupid way ought to be considerably more symmetric). And it may be that some value systems are intrinsically incoherent. Suppose you wanted to maximize blerg without knowing what blerg even really is. That wouldn’t be possible; you can’t maximize something without having a measure of it. But I can still tell you I’d give you 3^^^^3 blergs for a dollar, without either of us knowing what blerg is supposed to be or whether 3^^^^3 blergs even make sense (if a blerg is a unique good book of up to 1000 pages, it doesn’t, because duplicates aren’t blerg).