I am almost convinced, honestly. I was leaning towards a frequentist view, but I'm realizing now (as pointed out here by a fellow community member) that some of my statements are similar, if not identical, to the conclusion here:
Jaynes certainly believed very firmly that probability was in the mind… But Jaynes also didn’t think that this implied a license to make up whatever priors you liked. There was only one correct prior distribution to use, given your state of partial information at the start of the problem.
is pretty similar to:
Bayesian reasoning is the field which tells us the optimal probability to assign to a proposition given the rest of our information, but that that is the optimal probability given the rest of our information is a fact about the world.
(When I say “our” there I mean each of us as individuals, not our collective knowledge.)
I’ll give my view; I think I agree with EY; I’ll be as short as I can. We have a standard deck of playing cards. It is a fact about them that 1/4 of them are hearts. Not just to me or to someone else; it is a fact that the universe keeps track of: 1/4 of them are hearts. Two agents A and B are placing bets on the suit of the next card to come out. They both know that 1/4 of the cards are hearts, and that is all that A knows, but B also knows that 8 of the top 10 cards are hearts. So B bets “The top card is a heart.” and A bets “~The top card is a heart.” Beforehand they argue, and B says, “You know, I think there’s an 80% chance that I’ll win.” A says, “Are you crazy? Everyone knows that 75% of the cards in a deck aren’t hearts, so the chances are 75% in my favor.”
Now, of all the possible states that satisfy A’s knowledge, which is just the statement “75% of the cards in a deck aren’t hearts,” exactly 75% of them satisfy “~The top card is a heart.” So it is no wonder that A ascribes a 75% probability in his favor. A’s beliefs constrain A’s expected experiences, but not down to a single world; they constrain the set of possible worlds A thinks it might be in. The same goes for B: of the possible worlds that satisfy “8 of the top 10 cards are hearts,” 80% of them satisfy “The top card is a heart.” So, duh, B concludes that it has an 80% chance of winning. Using this simple setup it becomes clear to me that the more knowledge you have about the state of the deck, the more useful the probability you ascribe will be. This is because the more knowledge you have about the state of the deck, the more you constrain the space of possible worlds that you, as an agent, think you might be in. A third agent that knows every detail about the deck of course ascribes no probability other than 0 or 1 to any statement about the deck, since it knows exactly which possible world it is in.
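To check that counting argument numerically, here is a minimal sketch in Python (my own illustration, not part of the original argument): it samples decks consistent with each agent's knowledge and measures the fraction of sampled worlds in which the top card is a heart, assuming a standard 52-card deck tracked by suit only.

```python
import random

SUITS = ["hearts", "spades", "diamonds", "clubs"]
# A standard deck, tracked by suit only: 13 cards of each suit.
DECK = [suit for suit in SUITS for _ in range(13)]

def world_consistent_with_A():
    """A possible world consistent with A's knowledge: any shuffle of a standard deck."""
    deck = DECK[:]
    random.shuffle(deck)
    return deck

def world_consistent_with_B():
    """A possible world consistent with B's knowledge: exactly 8 of the top 10 cards are hearts."""
    hearts = ["hearts"] * 13
    non_hearts = [card for card in DECK if card != "hearts"]
    random.shuffle(non_hearts)
    top_ten = hearts[:8] + non_hearts[:2]   # 8 hearts and 2 non-hearts on top,
    random.shuffle(top_ten)                 # in an unknown order
    rest = hearts[8:] + non_hearts[2:]
    random.shuffle(rest)
    return top_ten + rest

def fraction_heart_on_top(sample_world, trials=100_000):
    """Estimate the fraction of consistent worlds in which the top card is a heart."""
    return sum(sample_world()[0] == "hearts" for _ in range(trials)) / trials

print(fraction_heart_on_top(world_consistent_with_A))  # ~0.25, A's probability
print(fraction_heart_on_top(world_consistent_with_B))  # ~0.80, B's probability
```

Running it gives roughly 0.25 for A and 0.80 for B, matching the counts above.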
So then I would say that the probability a perfect reasoner P gives to a statement S is the fraction of the possible worlds satisfying the rest of the statements P holds that also satisfy S. If P has no ignorance, then there is only one possible world consistent with its beliefs, and that fraction is always 1 or 0. And I would go as far as to say that this is a decent explanation of what a probability is. It’s a propositional attitude held by an agent; it has a value from 0 to 1, and it represents the fraction of the states the agent thinks it might find itself in that satisfy the given proposition. This doesn’t seem to me to be inconsistent with the view expressed in this post, but I’m not sure of that.
My view does suggest that “P(a) = such and such” is a claim. It’s just that it’s a claim about the possible worlds that an agent can consistently expect to find itself in, given the rest of its beliefs. An agent can be wrong about a probability it ascribes. Suppose an agent R has the same knowledge as B but insists that the probability of “The top card is a heart.” is 99%. Well, R is wrong: R is wrong about the fraction of worlds that satisfy R’s knowledge base that also satisfy “The top card is a heart.”
In conclusion, I will risk the hypothesis that:
“P(a|b) = x” is true if and only if a fraction x of the possible worlds that satisfy b also satisfy a.
But of course, there are no possible worlds without uncertainty, and there is no uncertainty without the ignorance of an agent in a determined world.
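To make that hypothesis concrete, here is a minimal sketch of the same idea with exhaustive enumeration rather than sampling; the four-card toy deck and the predicate names are hypothetical, chosen only so that every possible world can be listed.

```python
from itertools import permutations

# A tiny toy deck so that every possible world (ordering) can be enumerated exactly.
CARDS = ["heart1", "heart2", "spade", "club"]
WORLDS = list(permutations(CARDS))

def prob(a, b):
    """P(a|b): the fraction of possible worlds satisfying b that also satisfy a."""
    b_worlds = [w for w in WORLDS if b(w)]
    return sum(a(w) for w in b_worlds) / len(b_worlds)

def top_is_heart(w):
    return w[0].startswith("heart")

def a_knowledge(w):
    return True  # A only knows the deck's composition (half of this toy deck is hearts)

def b_knowledge(w):
    return w[0].startswith("heart") or w[1].startswith("heart")  # B also knows one of the top two is a heart

print(prob(top_is_heart, a_knowledge))  # 0.5
print(prob(top_is_heart, b_knowledge))  # 0.6
```

Knowing more shrinks the set of worlds B might be in, and the fraction, i.e. the probability, shifts accordingly.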