Kant thought that space being Euclidean was a priori logically necessary, hence determinable from pure thought, hence true without need for empirical fact checking… and in the end this turned out to be wrong. Einstein had the last laugh (so far).
I have wondered now and again whether Cox’s Postulates might be like Euclid’s Postulates: subject to similarly subtle, exceptional discrepancies with physical reality in practice.
It is hard to form hypotheses here, partly for lack of vivid theoretical alternatives. I know of two claims floating around in the literature that hint at substantive alternatives to Bayes.
One approach involves abandoning at least one of Aristotle’s three laws of thought (excluded middle, non-contradiction, and identity) and postulating, essentially, that reality itself might be ontologically ambiguous. If I had to pick one to drop, I think I’d drop excluded middle. Probably? Constructivist/intuitionistic logic routinely throws that one out, and automated proof systems often leave it out by default. Under the keyword “fuzzy logic” there have been attacks on these laws that directly reference Jaynes. So this is maybe one way to find a crack in the universe out of which we might wiggle.
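For concreteness, here is a minimal sketch (my own toy illustration, not anything from the fuzzy-logic papers alluded to above) of how excluded middle fails once truth values live in [0, 1], using the standard Zadeh connectives not(a) = 1 − a and or(a, b) = max(a, b):

```python
# Toy fuzzy-logic connectives (Zadeh's min/max definitions).
def f_not(a: float) -> float:
    return 1.0 - a

def f_or(a: float, b: float) -> float:
    return max(a, b)

for a in (0.0, 0.3, 0.5, 1.0):
    lem = f_or(a, f_not(a))  # classically "A or not-A" is always true (1)
    print(f"truth(A) = {a:.1f}  ->  truth(A or not-A) = {lem:.1f}")
```

Excluded middle survives only at the classical endpoints 0 and 1; at truth(A) = 0.5 the disjunction bottoms out at 0.5.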
The only other approach I know of in the literature is (for me) centrally based on later chapters of Scott Aaronson’s “Quantum Computing Since Democritus” (try clicking the link and then do ^f bayes), where, via hints and aspersions, Aaronson suggests that quantum mechanics can be thought of as Bayesian… except with complex numbers for the probabilities, and thus (maybe?) Bayesianism is essentially a potentially empirically false religion? Aaronson never quite says this directly and at length, and his mere hints would be where I left this summary… except that while hunting for evidence I ran across a link to what might be a larger and more direct attack on the physical reality of Bayesianism? (Looking at it: using axioms, no less! With “the fifth axiom” having variations, just like Euclid?!)
So that arXiv paper by Lucien Hardy (that I missed earlier! (that was written in 2008?!?)) might just have risen to the top of my philosophy reading stack? Neat! <3
Maybe it is worth adding a third approach that I don’t think really counts… As the number of variables in a belief net goes up, merely performing inference becomes very hard: under relatively general assumptions, exact inference is NP-hard. This “doesn’t count as a real, deep, philosophically satisfying alternative to Bayes” for me, because the practical upshot would just be that we need more CPU, and more causal isolation for the systems we care about (so their operation is more tractable to reason about). Like… the practical impossibility of applying Bayes in general to large systems would almost help FIGHT the other “possible true/deep alternatives” to Bayes, because it creates an alternative explanation for the subjective experience of feeling like you had the probabilities figured out, and then having them come out very wrong. Like: maybe there were too many variables, and the NP-hardness just caught up with you? Would you really need to question the “laws of thought” themselves to explain having been in the physical world and then ending up “surprisingly surprised”? Seriously? Seriously?
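(To make the “NP-hardness just caught up with you” point concrete: a hedged sketch, with names of my own invention, of why exact inference scales badly. Brute-force marginalization touches all 2^n joint states, and exact inference in general belief nets is NP-hard (Cooper, 1990), so in the worst case nothing fundamentally better is known.)

```python
import itertools

def brute_force_marginal(joint, n, query_var):
    """P(query_var = 1), computed by summing the joint over all 2**n states."""
    num = total = 0.0
    for state in itertools.product([0, 1], repeat=n):
        p = joint(state)
        total += p
        if state[query_var] == 1:
            num += p
    return num / total

# Toy (unnormalized, uniform) joint, just to have something runnable;
# real belief nets factor into conditional tables, but the worst-case
# summation cost is still exponential in n.
n = 20  # already ~10**6 terms; 60 variables would be ~10**18
print(brute_force_marginal(lambda s: 1.0, n, query_var=0))  # -> 0.5
```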
Anyway. I was wondering if you, having recently looked at the pillars of pure thinking themselves, had thoughts about any cracks, or perhaps any even deeper foundations, that they might have :-)
You might also be interested in “General Bayesian Theories and the Emergence of the Exclusivity Principle” by Chiribella et al., which claims that quantum theory is the most general theory satisfying Bayesian consistency conditions.
By now, there are actually quite a few attempts to reconstruct quantum theory from more “reasonable” axioms besides Hardy’s. You can track the references in the paper above to find some more of them.
Thank you for your well-thought-out comment. One of the desiderata used to derive the original product rule is that degrees of plausibility be represented by real numbers. So it will be very interesting to see whether the result still holds if we relax this to allow complex numbers.
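(For what it’s worth, here is a minimal sketch, mine rather than anything from the post, of the textbook two-path example showing what changes once “plausibilities” are complex: amplitudes for indistinguishable paths add before the modulus-squared is taken, so they can cancel, which no assignment of nonnegative real probabilities to the individual paths can reproduce.)

```python
import cmath

# Two indistinguishable paths with equal weight but opposite phase.
a1 = cmath.exp(0j) / cmath.sqrt(2)             # amplitude via path 1
a2 = cmath.exp(1j * cmath.pi) / cmath.sqrt(2)  # path 2, phase-shifted by pi

p1 = abs(a1) ** 2              # each path alone: probability 1/2
p2 = abs(a2) ** 2
p_additive = p1 + p2           # ordinary real, additive combination: 1.0
p_quantum = abs(a1 + a2) ** 2  # add amplitudes first, then square: ~0.0

print(f"p1={p1:.2f} p2={p2:.2f} additive={p_additive:.2f} quantum={p_quantum:.2f}")
```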