I think this is kind of funny considering that the second axiom of probability states that the probability that some elementary event in the entire sample space will occur is 1. It’s just a simple way to define the system, like how the axioms of geometry become simpler if you add a point at infinity (as in projective geometry). It doesn’t necessarily mean anything. I just find it kind of funny.
the probability that some elementary event in the entire sample space will occur is 1
I believe that a part of the post’s point is that the entire sample space is hard to find in most real-life cases. From the post:
However, in the real world, when you roll a die, it doesn’t literally have infinite certainty of coming up some number between 1 and 6. The die might land on its edge; or get struck by a meteor; or the Dark Lords of the Matrix might reach in and write “37” on one side.
EDIT: Another example, this time from Martin Gardner’s excellent book Mathematical Games:
The hotel’s cocktail lounge before the dinner hour was noisy with prestidigitators. At the bar I ran into my old friend “Bet a Nickel” Nick, a blackjack dealer from Las Vegas who likes to keep up with the latest in card magic. The nickname derives from his habit of perpetually making five-cent bets on peculiar propositions. Everybody knows his bets have “catches” to them, but who cares about a nickel? It was worth five cents just to find out what he was up to.
“Any new bar bets, Nick?” I asked. “Particularly bets with probability angles?”
Nick slapped a dime on the counter beside his glass of beer. “If I hold this dime several inches above the top of the bar and drop it, chances are one-half it falls heads, one-half it falls tails, right?”
“Right,” I said.
“Betcha a nickel,” said Nick, “it lands on its edge and stays there.”
“O.K.,” I said.
Nick dunked the dime in his beer, placed it against the side of his glass and let it go. It slid down the straight side, landed on its edge and stayed on its edge, held to the glass by the beer’s adhesion. I handed Nick a nickel. Everybody laughed.
Nick tore a paper match out of a folder, marked one side of the match with a pencil. “If I drop this match, chances are fifty-fifty it falls marked side up, right?” I nodded. “Betcha a nickel,” he went on, “that it falls on its edge, like the dime.”
“It’s a bet,” I said.
Nick dropped the match. But before doing so, he bent it into the shape of a V. Of course it fell on its edge and I lost another nickel.
Jaynes didn’t like Kolmogorov’s axioms, and I expect Eliezer would agree. I remember he mentioned somewhere in the Sequences that he thought probability could be axiomatized without reference to probabilities of 0 or 1, but it wouldn’t have much practical use to do so.
Jaynes definitely believed in 0 and 1 probabilities. In Probability Theory: The Logic of Science, equation (2.71), he gives
P(B | A, (A implies B)) = 1
P(A | not B, (A implies B)) = 0
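These two identities can be checked mechanically. The sketch below (my own illustration, not from the thread) brute-forces the four truth assignments to (A, B), treating the conditional probability as the fraction of equally weighted worlds satisfying the condition:

```python
# Check Jaynes's eq. (2.71) by enumerating truth assignments to (A, B).
# Conditioning on A and on "A implies B" leaves only worlds where B holds.
from itertools import product

def conditional(event, given):
    """P(event | given) over the four truth assignments to (A, B),
    with each world weighted equally."""
    worlds = [w for w in product([True, False], repeat=2) if given(*w)]
    return sum(event(*w) for w in worlds) / len(worlds)

implies = lambda a, b: (not a) or b

# P(B | A, (A implies B)) = 1
print(conditional(lambda a, b: b, lambda a, b: a and implies(a, b)))  # 1.0
# P(A | not B, (A implies B)) = 0
print(conditional(lambda a, b: a, lambda a, b: (not b) and implies(a, b)))  # 0.0
```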
Remember that probabilities are relative to a state of information. If X is a state of information from which we can infer A via deductive logic, then P(A | X) = 1 necessarily. Some common cases of this are
- A is a tautology,
- we are doing some sort of case analysis and X represents one of the cases being considered, or
- we are investigating the consequences of some hypothesis and X represents the hypothesis.
However, Eliezer’s fundamental point is correct when we turn to the states of information of rational beings and propositions that are not tautologies or theorems. If a person’s state of information is X, and P(A | X) = 1, then no amount of contrary evidence can dissuade that person of A. This does not sound like rational behavior, unless A is necessarily true (in the mathematical sense of being a tautology or theorem).
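The point that no evidence can dissuade someone from P(A) = 1 falls directly out of Bayes’ rule: priors of exactly 0 or 1 are fixed points of the update. A minimal sketch (my own illustration, not from the thread):

```python
# Bayes' rule as an update on P(A), showing that priors of exactly
# 0 or 1 never move, no matter how strong the evidence.

def update(prior, likelihood_if_a, likelihood_if_not_a):
    """One Bayesian update of P(A) on evidence E, given
    P(E | A) and P(E | not A)."""
    numerator = prior * likelihood_if_a
    denominator = numerator + (1 - prior) * likelihood_if_not_a
    return numerator / denominator

# A merely confident prior yields to strong contrary evidence...
p = 0.9
for _ in range(5):
    p = update(p, 0.01, 0.99)  # evidence 99x more likely if A is false
print(p)  # driven toward 0

# ...but a prior of exactly 1 is a fixed point of the update.
p = 1.0
for _ in range(5):
    p = update(p, 0.01, 0.99)
print(p)  # still 1.0
```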
Jaynes definitely believed in 0 and 1 probabilities.
I did not say that he didn’t. I said that he didn’t like Kolmogorov’s axioms. You can also derive Bayes’ rule from Kolmogorov’s axioms; that doesn’t mean Jaynes didn’t believe in Bayes’ rule.
I don’t know what, specifically, it means to not like axioms, so I’m not sure what you mean.
I meant that he didn’t think they were the best way to describe probability. IIRC, he thought that they didn’t make it clear why the structure they described is the right way to handle uncertainty. He also may have said that they allow you to talk about certain objects that don’t really correspond to any epistemological concepts. You can find his criticism in one of the appendices to Probability Theory: The Logic of Science.