What would it even mean for there to be a universal prior?
I have a post that may better explain what I am looking for.
You only exist in this one universe. How good a prior is comes down to how much probability it assigns to this universe.
This seems to fall under position 1 or 2 in my post. Currently my credence is mostly distributed between positions 3 and 4 in that post. Reading it may give you a better idea of where I’m coming from.
Position 1 or 2 is correct. 3 isn’t coherent: what is “reality fluid”, and how can some things be more “real” than others? Where do subjective beliefs come from in this model? 4 has nothing to do with probability theory. Values and utility functions don’t enter into it. Probability theory is about making predictions and doing statistics, not about how much you care about different worlds which may or may not actually exist.
I interpret probability as expectation. I want to make predictions about things. I want to maximize the probability I assign to the correct outcomes. If I multiply together all the predictions I ever made of the correct outcomes, I want that number to be as high as possible. That product would be the probability I gave to the world, or at least to my observations of it.
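To make that concrete, here is a minimal sketch in Python, with made-up numbers: by the chain rule, multiplying the sequential predictions you made of each correct outcome gives the total probability you assigned to the observed sequence, and maximizing that product is the same as maximizing the sum of log probabilities.

```python
import math

# Hypothetical predictions: the probability assigned to each correct
# outcome as it happened, one prediction per observation.
predictions = [0.8, 0.6, 0.9, 0.7]

# By the chain rule, the product of the sequential conditional predictions
# is the total probability assigned to the observed sequence of outcomes.
prob_of_observations = math.prod(predictions)

# Maximizing the product is equivalent to maximizing the log score,
# since log is monotonic and turns the product into a sum.
log_score = sum(math.log(p) for p in predictions)

print(prob_of_observations)  # 0.3024
print(math.exp(log_score))   # same value, recovered from the log score
```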
So then it doesn’t really matter what the numbers represent, just that I want them to be as high as possible. When I make decisions based on the numbers using some decision theory/algorithm and utility function, the higher the numbers are, the better my results will be.
I’m reminded of someone’s attempt to explain probability without using words like “likely”, “certain”, or “frequency”. It was basically an impossible task. If I were going to attempt it, I would say something like the previous two paragraphs. Saying things like “weights”, “reality fluid”, “measure”, “possible world”, etc., just pushes the meaning elsewhere.
In any case, all of your definitions should be mathematically equivalent. They might have different philosophical implications, but they should all produce the same results on any real-world problem. Or at least I think they should. You aren’t disputing Bayes’ theorem or standard probability theory or anything?
In that case, the choice of prior has the same consequences under any interpretation. And you still want to choose the prior that you think will assign the actual outcome the highest probability.
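As a toy illustration of scoring priors this way, here is a minimal Python sketch (the coin-flip setup and all the numbers are made up for the example): each candidate prior is judged by the total probability it assigns to the data that actually occurred, i.e. its marginal likelihood.

```python
# Hypothetical setup: two candidate priors over a coin's bias, scored by
# how much probability each assigns to the flips we actually observed.
flips = [1, 1, 0, 1, 1, 0, 1, 1]  # 1 = heads; made-up observations

def marginal_likelihood(prior, flips):
    """P(data) = sum over hypotheses of P(hypothesis) * P(data | hypothesis)."""
    total = 0.0
    for bias, p_hyp in prior.items():
        p_data = 1.0
        for f in flips:
            p_data *= bias if f == 1 else (1 - bias)
        total += p_hyp * p_data
    return total

spread_prior = {0.25: 1/3, 0.5: 1/3, 0.75: 1/3}  # mass spread over three biases
fair_prior   = {0.5: 1.0}                        # all mass on a fair coin

# The better prior, in this sense, is whichever assigns the actual
# outcome sequence more probability.
print(marginal_likelihood(spread_prior, flips))  # ~0.00505
print(marginal_likelihood(fair_prior, flips))    # ~0.00391
```

Whichever prior gives the observed sequence more probability is, on this view, the better prior; nothing about the comparison depends on what the numbers “represent”.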