This was shorthand for “I hold several contrarian beliefs about nutrition which seem to fit this pattern but don’t really belong in this comment”.
And the non-experts arguing the non-standard position are supposed to be smarter than typical experts?
Sometimes. To make a good decision about whether to copy a contrarian position, you generally have to either be smarter (but perhaps less domain-knowledgeable) than typical experts, or else have a good estimate of some other contrarian’s intelligence and rationality and judge them to be high. (If you can’t do either of these things, then you have little hope of choosing correct contrarian beliefs.)
What do you mean by Bayesianism? Bayesian statistics or Bayesian epistemology? How would they destroy concepts in which experts have substantial sunk costs?
I mean Bayesian statistical methods, as opposed to frequentist ones. (This isn’t a great example because there’s not actually such a clean divide, and the topic is tainted by prior use as a Less Wrong shibboleth. Luke’s original example—theology—illustrates the point pretty well.)
If you can’t do either of these things, then you have little hope of choosing correct contrarian beliefs.
Notably, even if you can’t do either of these things, sometimes you can rationally reject the mainstream position if you can conclude that the incentive structure for the “typical experts” makes them hopelessly biased in a particular direction.
This shouldn’t lead to rejection of the mainstream position, exactly, but rejection of the evidential value of mainstream belief, and reversion to your prior belief / agnosticism about the object-level question.