Added an example of when it isn’t possible to specify arbitrary preference for a given prior, and a philosophical note at the end (related to the “where do the priors come from” debate).
I don’t follow the equation of preference and priors in the last paragraph.
What do you mean?
Could you demonstrate? I don’t understand.
I also don’t understand what you mean above.
What is usually called a “prior” is represented by the measure P in the post. Together with the “shouldness” Q, they constitute the recipe for computing preference over events, through expected utility.
If it’s not possible to choose a prior more or less arbitrarily and then fill in the gaps using utility to get the correct preference, then some priors are inherently incorrect for human preference, and finding the priors that admit completion to the correct preference with a fitting utility requires knowledge about preference.
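The recipe above can be sketched concretely. This is a minimal illustration, not the post's construction: it assumes both the prior P and the shouldness Q are measures over a finite outcome set, so the conditional expected utility of an event E comes out to Q(E)/P(E), with the utility of a single outcome o being U(o) = Q({o})/P({o}). All names and numbers here are made up for illustration.

```python
def measure(m, event):
    """Total measure of an event (a set of outcomes) under measure m."""
    return sum(m[o] for o in event)

def expected_utility(event, P, Q):
    """Conditional expected utility of an event.

    With U(o) = Q({o}) / P({o}), the expectation of U given the event
    collapses to the ratio of the two measures: Q(E) / P(E).
    """
    return measure(Q, event) / measure(P, event)

# Illustrative prior (probability measure) and shouldness (utility-weighted measure).
P = {"a": 0.5, "b": 0.3, "c": 0.2}
Q = {"a": 0.5, "b": 0.0, "c": 0.4}

# Preference over events: E1 is preferred to E2 iff its expected utility is higher.
prefers = expected_utility({"a", "c"}, P, Q) > expected_utility({"b"}, P, Q)
```

Note that holding the preference order over events fixed while changing P forces a compensating change in Q; whether every P admits such a completion is exactly the question at issue.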
Regarding your second point: I’m not sure how it’s rational to choose your beliefs based on some subjective preference order.
Perhaps you could suggest a case where it makes sense to reason from preferences to “priors which make my preferences consistent”, because I’m also fuzzy on the details of when and how you propose to do so.
I see—by “prior” you mean “current estimate of probability”, because P was defined
I’ve lately been working with learning research, where “prior” means the probability assigned to a given model of outcome probabilities before any evidence, so maybe I was being a bit rigid.
In any case, I suggest you consistently use “probability” and drop “prior”.