At a very rough guess, I think Bayesian thinking is helpful in 50-80% of nontrivial epistemic problems, which is more than can be said for p-adic analysis.
How might the law-type properties be indirectly relevant? Here are some cases:
In game theory it’s pretty common to assume that the players are Bayesian about certain properties of the environment (see Bayesian game). Some generality is lost by doing so (after all, reasoning about non-Bayesian players might be useful), but, due to the complete class theorems, less generality is lost than one might think, since (with some caveats) all policies that are not strictly dominated are Bayesian policies with respect to some prior.
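To make the complete-class point concrete, here is a toy sketch (my own construction, with made-up numbers, not part of any actual proof of the theorems): in a two-state problem with a single binary observation and 0-1 loss, every decision rule that is not strictly dominated turns out to be the Bayes-optimal rule for some prior.

```python
import itertools

# Toy sketch of the complete class idea (made-up numbers): in a tiny decision
# problem, every rule that is not strictly dominated is Bayes-optimal for some prior.

states = [0, 1]                       # possible states of nature (theta)
p_obs1 = {0: 0.3, 1: 0.8}             # P(X = 1 | theta), assumed for illustration

def loss(theta, action):
    return 0.0 if action == theta else 1.0

# A decision rule maps the observation x in {0, 1} to an action in {0, 1};
# rule[x] is the action taken after seeing x.
rules = list(itertools.product([0, 1], repeat=2))

def risk(theta, rule):
    """Expected loss of a rule in state theta, averaging over the observation."""
    p1 = p_obs1[theta]
    return (1 - p1) * loss(theta, rule[0]) + p1 * loss(theta, rule[1])

risk_vec = {rule: tuple(risk(t, rule) for t in states) for rule in rules}

def strictly_dominated(rule):
    r = risk_vec[rule]
    return any(
        all(o <= x for o, x in zip(risk_vec[other], r)) and
        any(o < x for o, x in zip(risk_vec[other], r))
        for other in rules if other != rule
    )

# Sweep a grid of priors on theta = 1 and record which rules are Bayes-optimal.
bayes_rules = set()
for i in range(101):
    prior = i / 100
    bayes_rules.add(min(rules, key=lambda rule: (1 - prior) * risk_vec[rule][0]
                                                + prior * risk_vec[rule][1]))

for rule in rules:
    print(rule, risk_vec[rule],
          "dominated" if strictly_dominated(rule) else "undominated",
          "| Bayes for some prior" if rule in bayes_rules else "| never Bayes")
```

In this toy case the three undominated rules (always act 0, always act 1, follow the observation) each win for some prior, while the dominated rule (contradict the observation) never does.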
Sometimes likelihood ratios for different theories with respect to some test can be computed or approximated, e.g. in physics. Bayes’ rule then relates each prior probability to a posterior probability. Even without a way to determine the right prior for the different theories, if we can form a set of “plausible” priors (e.g. based on the parsimony of the different theories and existing evidence), then Bayes’ rule yields a set of “plausible” posteriors, which can be narrow even if the set of plausible priors was broad.
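As a small numerical sketch (the likelihood ratio and priors below are made up for illustration), the odds form of Bayes’ rule shows how a strong test compresses a broad range of priors into a narrow range of posteriors:

```python
# Bayes' rule in odds form: posterior odds = likelihood ratio * prior odds.
def posterior(prior, likelihood_ratio):
    prior_odds = prior / (1 - prior)
    post_odds = likelihood_ratio * prior_odds
    return post_odds / (1 + post_odds)

likelihood_ratio = 1000.0             # P(evidence | theory A) / P(evidence | theory B), assumed
plausible_priors = [0.05, 0.2, 0.5]   # a deliberately broad set of priors for theory A

for p in plausible_priors:
    print(f"prior {p:.2f} -> posterior {posterior(p, likelihood_ratio):.3f}")
# prior 0.05 -> posterior 0.981
# prior 0.20 -> posterior 0.996
# prior 0.50 -> posterior 0.999
```

Here priors spanning 0.05 to 0.5 all land on posteriors above 0.98, so the conclusion is insensitive to exactly which “plausible” prior one starts from.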
Bayes’ rule implies properties of belief updates such as conservation of expected evidence. If I expect my beliefs about some proposition to update in a particular direction on average, then I am expecting myself to violate Bayes’ rule, which implies (by CCT) that, if the set of decision problems I might face is sufficiently rich, I expect my beliefs to yield some strictly dominated decision rule. It is not clear what to do in this state of knowledge, but the fact that my decision rule is currently strictly dominated does imply that I am somewhat likely to make better decisions if I think about the structure of my beliefs and where the inconsistency is coming from. (In effect, noticing violations of Bayes’ rule is a diagnostic tool, similar to noticing violations of logical consistency.)
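Here is a quick numerical check of conservation of expected evidence (the numbers are made up): averaged over the evidence I expect to see, my expected posterior equals my prior, so any anticipated directional drift in my beliefs would be a Bayes-rule violation.

```python
# Conservation of expected evidence: E[P(H | evidence)] = P(H).
p_h = 0.3                    # prior P(H), assumed
p_e_given_h = 0.9            # P(E | H), assumed
p_e_given_not_h = 0.2        # P(E | not H), assumed

p_e = p_h * p_e_given_h + (1 - p_h) * p_e_given_not_h
post_if_e = p_h * p_e_given_h / p_e
post_if_not_e = p_h * (1 - p_e_given_h) / (1 - p_e)

expected_posterior = p_e * post_if_e + (1 - p_e) * post_if_not_e
print(expected_posterior)    # 0.3, equal to the prior
# Expecting the posterior to move in a particular direction *on average*
# would contradict this identity, i.e. violate Bayes' rule.
```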
I do think that some advocacy of Bayesianism has been overly ambitious, for the reasons stated in your post as well as those in this post. I think Jaynes in particular is overly ambitious in applications of Bayesianism, such as in recommending maximum-entropy models as an epistemological principle rather than as a useful tool. And I think this post by Eliezer (which you discussed) overreaches in a few ways. I still think that “Strong Bayesianism” as you defined it is a strawman, though there is some cluster in thoughtspace that could be called “Strong Bayesianism” that both of us would have disagreements with.
(As an aside, as far as I can tell, the entire Ap section of Jaynes’s Probability Theory: The Logic of Science is logically inconsistent.)