Back when Pearl was arguing for the adoption of probability over logic in AI (for example, in the introduction to his 1988 book), he was talking about things like “rumor propagation” that logic calculus does not handle well but probability calculus does. There is not much mention of Bayesian vs. frequentist (B vs F) in his defense, just properties of probability theory itself.
What Jaynes’ quote is really about is that logic calculus and probability calculus are not the same. Logic obeys locality, probability does not, etc. Bringing issues of epistemology into this is either a confusion of issues or a deliberate bait and switch.
I publish a lot in a conference called “UAI” (Uncertainty in AI). The word “uncertainty” is in the title, but I assure you, the community is far from uniformly Bayesian.
Agreed, and I would go further and say Jaynes is making the claim that probability calculus is superior to logic calculus.
I was confused: I flipped “interpret probabilities as beliefs” and “interpret beliefs as probabilities.”
The issue I still haven’t resolved is whether you’re objecting to calling the use of Bayes’ law “Bayesian inference” instead of “probabilistic/statistical inference,” and why. I think I see why you would want to, but I’m not confident in my speculation.
Yes, I do (because Bayes’ law is a triviality; calling any use of it “Bayesian” is like calling any use of fractions “frequentist”).
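To underline why the law itself is a triviality: it is one line of arithmetic on a joint distribution, available to any user of probability theory regardless of interpretation. A minimal sketch with invented numbers:

```python
# Bayes' law as plain arithmetic. All numbers here are made up
# purely for illustration; nothing interpretation-specific happens.

p_h = 0.01              # P(H): probability of a hypothesis
p_e_given_h = 0.9       # P(E|H): probability of evidence given H
p_e_given_not_h = 0.05  # P(E|~H): probability of evidence given not-H

# Law of total probability: P(E) = P(E|H)P(H) + P(E|~H)P(~H)
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Bayes' law: P(H|E) = P(E|H) P(H) / P(E)
p_h_given_e = p_e_given_h * p_h / p_e

print(round(p_h_given_e, 4))  # 0.1538
```

That is the entire mechanism; whether you read the inputs as long-run frequencies or degrees of belief changes nothing about the calculation.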
I see where you’re coming from, and I agree that probabilistic inference is a more precise term that should be used instead. But I think my sense of shouldness there is much smaller than your sense of shouldness, and I can’t tell if that’s more tolerance for verbal sloppiness on my part or some sort of regret at narrowing the term Bayesian inference. I think I endorse the first but not the second explanation.
I just find LW’s often uninformed “Bayesianism” annoying, that’s all. It’s classic belief as attire: I am willing to bet that having an opinion on these sorts of things makes zero (0) practical difference in most people’s lives.
In fact, lots of things in the “LW cluster” are like this: MWI (the many-worlds interpretation), for instance.
edit: lest you think I am joking, here’s someone who calls the g-formula “an ad hoc frequentist device”:
http://lesswrong.com/lw/hwq/evidential_decision_theory_selection_bias_and/9crr
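For reference, the g-formula is an application of the law of total probability to a causal model, not a device tied to either the B or the F camp. A toy sketch, with invented numbers and a single binary confounder C:

```python
# Toy g-formula computation: the effect of intervening to set A=1
# on outcome Y, adjusting for one binary confounder C.
# All probabilities below are made up for illustration only.

p_c = {0: 0.7, 1: 0.3}             # P(C=c)
p_y1_given_a1_c = {0: 0.2, 1: 0.6}  # P(Y=1 | A=1, C=c)

# g-formula: P(Y=1 | do(A=1)) = sum_c P(Y=1 | A=1, C=c) * P(C=c)
p_y1_do_a1 = sum(p_y1_given_a1_c[c] * p_c[c] for c in (0, 1))

print(round(p_y1_do_a1, 2))  # 0.2*0.7 + 0.6*0.3 = 0.32
```

Again, this is just probability calculus applied to a causal model; nothing in it is “frequentist” (or “Bayesian”) per se.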
Here’s a post by Nyan Sandwich:
http://lesswrong.com/lw/irj/crush_your_uncertainty/
which amounts to saying “get more data!” but which starts as follows:
“Bayesian epistemology and decision theory provide a rigorous foundation for dealing with mixed or ambiguous evidence, uncertainty, and risky decisions.”
which is a complete non-sequitur to the main point. I could keep going, but I hope the point is clear. There is an alarming lack of clue coupled with an alarming amount of “rah rah Bayes”. (Note: in case it is not obvious, I have no horse in the B vs F race at all, I am happy to write papers in either formalism. My point is not about B vs F at all.)
Do you also get annoyed when people say “prior” when they mean “probability estimate”? As in, “my priors have been updated”.