I would like to share some interesting discussion of a hidden assumption used in Cox’s Theorem (the result that what falls out of the desiderata is a probability measure).
First, some criticism of Cox’s Theorem, from a paper by Joseph Y. Halpern published in the Journal of AI Research. He points out an assumption that is necessary to arrive at the associativity functional equation:
F(x, F(y,z)) = F(F(x,y), z) for all x,y,z
This is equation (2.13) in PT:TLoS.
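For context, here is a quick sketch of where the equation comes from (my reconstruction, using the book’s notation, where (A|B) denotes the plausibility of A given B and the product-rule desideratum takes the form (AB|C) = F[(B|C), (A|BC)]). Decompose (ABC|D) in two ways:

Grouping ABC as (AB)C: (ABC|D) = F[(C|D), (AB|CD)] = F[(C|D), F[(B|CD), (A|BCD)]]

Grouping ABC as A(BC): (ABC|D) = F[(BC|D), (A|BCD)] = F[F[(C|D), (B|CD)], (A|BCD)]

Since A(BC) = (AB)C, the two right-hand sides must agree, and writing x = (C|D), y = (B|CD), z = (A|BCD) gives the equation above.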
Because the equation is derived from the associativity of the conjunction operation, A(BC) = (AB)C, there are restrictions on what values the plausibilities x, y, and z can take: they must be plausibilities that actually arise from propositions in this way. If those restrictions were stringent enough that x, y, and z could take on only finitely many values, or if they missed an entire interval of values, the proof would fall apart. An additional assumption is therefore needed: the values they can take must form a dense subset of the range of plausibilities. Halpern argues that this assumption is unnatural and unreasonable, since it disallows “notions of belief with only finitely many gradations.” For example, many AI systems consider only finitely many propositions.
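To make concrete what the density assumption is doing (this is my summary of the standard argument, so take the details with a grain of salt): when the attainable values of x, y, and z fill out a continuum, the general continuous, strictly monotonic solution of the associativity equation is

F(x, y) = g^{-1}(g(x) + g(y))

for some continuous, strictly monotonic function g (this is Aczél’s associativity theorem; I believe Jaynes reaches the same form assuming differentiability). Setting w = exp(g) then turns the conjunction rule into the familiar product rule, w(AB|C) = w(A|BC) w(B|C). If the attainable plausibilities formed only a finite set, or missed an entire interval, nothing would force F into this form, and that is exactly the gap Halpern exploits.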
K. S. Van Horn’s article on Cox’s Theorem addresses this criticism directly and powerfully, starting on page 9. He argues that the theory being proposed should be universal, and so having holes in the set of plausibilities should be unacceptable.
Anyhow, I found it interesting, if only because it makes explicit a hidden assumption in the proof.