Here’s how I see it.
Traditionally, probability was all about random events, things like rolling dice. The probability of a thing was what fraction of the time it would happen if you were somehow able to bring about the same random situation over and over again. Asking for (say) the probability that there is a god, or the probability that your wife has been unfaithful to you, was a type error like asking what colour happiness is.
But once you start thinking about conditional probability, you run into Bayes’ theorem, which points out a sort of symmetry between “the probability of A, given B” and “the probability of B, given A” and encourages you to ask what happens if you attach probabilities to everything. And it turns out that you can kinda do this, and that any situation where you’re reasoning about uncertain things can be treated using the methods of probability theory.
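For concreteness, the symmetry in question is just the theorem itself:

$$P(A \mid B) \;=\; \frac{P(B \mid A)\,P(A)}{P(B)}$$

that is, a claim about how likely B is given A, together with the unconditional probabilities of A and B, pins down how likely A is given B.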
One way to apply that insight is to think about any given agent’s state of knowledge in probabilistic terms. It turns out that for certain kinds of mathematically idealized agents in certain kinds of mathematically idealized situations, the One True Way to represent their incomplete knowledge of the world is in terms of probabilities. This gets you “subjective Bayesianism”: you don’t ask “what is the probability of a given thing?”, you ask “what is my probability of a given thing?”; different agents will have different probabilities because they start off with different prior probabilities and/or see different evidence afterwards.
But you can have “objective Bayesianism” in a few senses. Firstly, given your prior probabilities, your conditional probabilities, and your subsequent observations, the updates you make to get your posterior probabilities are dictated by the mathematics; you don’t get to choose those. So, while anyone can attach any probability to anything, some combinations of probability assignments are just wrong. If you say you’re 80% sure that some particular sort of god exists, and that you’re 90% sure that this god would do an otherwise-improbable thing X, and then X doesn’t happen, and you say you’re still 80% sure about that god’s existence, then something is amiss.
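To make that concrete, here’s a rough sketch of the update that’s being skipped. The 80% and 90% figures come from the example above; the 5% chance of X happening anyway, without the god, is a number I’m making up purely for illustration:

```python
# Hypothetical illustration of the forced Bayesian update described above.
p_god = 0.80             # prior: P(god exists) -- from the example
p_x_given_god = 0.90     # P(X | god)           -- from the example
p_x_given_no_god = 0.05  # P(X | no god)        -- assumed ("otherwise-improbable")

# Law of total probability: P(not X)
p_not_x = (1 - p_x_given_god) * p_god + (1 - p_x_given_no_god) * (1 - p_god)

# Bayes' theorem: P(god | not X)
posterior = (1 - p_x_given_god) * p_god / p_not_x
print(posterior)  # ~0.296 -- a long way from the original 80%
```

However you set that last assumed number, as long as X really was more likely given the god than without, the posterior has to drop once X fails to happen; staying at 80% isn’t one of the allowed answers.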
Secondly, in some contexts the relevant probabilities are known. When you’re learning elementary probability theory in school, you get questions like “Joe rolls a fair die three times. The sum of the three rolls is 12. What’s the probability that the first roll yielded an odd number?”, and that question has a definite right answer and there’s nothing subjective about it. (This changes if you apply it to the real world and e.g. have concerns that the die may not be perfectly fair or that Joe may have misreported the sum of the rolls. Then your priors for those things affect your answer.)
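And since the sample space there is tiny, you can even check that definite right answer by brute force; a quick sketch of my own, not part of the problem as posed:

```python
# Enumerate every equally likely ordered triple of fair-die rolls,
# condition on the sum being 12, and count how often the first roll is odd.
from itertools import product

triples = [t for t in product(range(1, 7), repeat=3) if sum(t) == 12]
odd_first = [t for t in triples if t[0] % 2 == 1]

print(len(odd_first), len(triples))    # 12 25
print(len(odd_first) / len(triples))   # 0.48
```

So the answer is 12⁄25, and within the idealized setup nothing about it depends on who’s doing the asking.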
Thirdly, you can ask not about your probabilities but those of some sort of hypothetical ideal agent. The resulting probability assignments will be objective to whatever extent your specification of that hypothetical agent is itself objective.
Fourthly (this is closely related to #2 and #3 above), in some cases you may consider that while you could choose your prior probabilities however you like, there’s only one reasonable way to choose them. (E.g., you know that in some entirely alien language “glorp” and “spung” are the words for magnetic north and south, and someone hands you a bar magnet. What’s the probability that this end is the glorp end? Gotta be 1⁄2, right? After all, you have absolutely no information that would introduce an asymmetry between the two possible answers.)