If relevant experts seem to disagree with a position, this is evidence against it. But this evidence is easily screened off, if:
- The standard position is fully explained by a known bias.
- The position is new enough that newness alone explains why it is not widespread (e.g., nutrition).
- The nonstandard position is more complicated than the standard one, and exceeds the typical expert's complexity limit (AGI).
- The nonstandard position would destroy concepts in which experts have substantial sunk costs (Bayesianism).
> Nutrition what?

This was shorthand for "I hold several contrarian beliefs about nutrition which seem to fit this pattern but don't really belong in this comment."

> And the non-experts arguing the nonstandard position are supposed to be smarter than typical experts?

Sometimes. To make a good decision about whether to copy a contrarian position, you generally have to either be smarter (though perhaps less domain-knowledgeable) than typical experts, or else have a good estimate of some other contrarian's intelligence and rationality and judge them to be high. (If you can't do either of these things, then you have little hope of choosing correct contrarian beliefs.)

> What do you mean by Bayesianism? Bayesian statistics or Bayesian epistemology? And how would it destroy concepts in which experts have substantial sunk costs?

I mean Bayesian statistical methods, as opposed to frequentist ones. (This isn't a great example, because there isn't actually such a clean divide, and the topic is tainted by prior use as a Less Wrong shibboleth. Luke's original example, theology, illustrates the point pretty well.)
> If you can't do either of these things, then you have little hope of choosing correct contrarian beliefs.

Notably, even if you can't do either of these things, you can sometimes rationally reject the mainstream position if you can conclude that the incentive structure for the "typical experts" makes them hopelessly biased in a particular direction.

This shouldn't lead to rejection of the mainstream position, exactly, but to rejection of the evidential value of mainstream belief, and reversion to your prior belief or agnosticism about the object-level question.
> The nonstandard position is more complicated than the standard one, and exceeds the typical expert's complexity limit (AGI)

I doubt that in the particular case of AGI, the nonstandard position's complexity exceeds the typical AI expert's complexity limit. I know a few AI experts, and they can handle extreme complexity. For that matter, I think AGI is well within the complexity limits of general computer science, mathematics, physics, and science experts, and at least some social-science experts (e.g., Robin Hanson).

In fact, the number of such experts who have looked seriously at AGI and come to different conclusions strongly suggests to me that the jury is still out on this one. The answers, whatever they are, are not obvious or self-evident.

The same goes with s/AGI/Bayesianism. Bayesianism is routinely and quickly adopted within the community of mathematicians, scientists, and software developers when it is useful and produces better answers. The conflict between Bayesianism and frequentism that is sometimes alluded to here is simply not an issue in everyday practical work.
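To illustrate the kind of everyday practical work meant here, a minimal sketch (purely illustrative; the numbers and the uniform prior are my own choices, not from the discussion above) of the two approaches to estimating a coin's bias from the same data:

```python
# Estimating a coin's bias from 7 heads in 10 flips.
heads, flips = 7, 10
tails = flips - heads

# Frequentist: maximum-likelihood point estimate.
mle = heads / flips  # 7/10 = 0.7

# Bayesian: start from a uniform Beta(1, 1) prior and update on the data.
# The Beta distribution is conjugate to the binomial likelihood, so the
# posterior is Beta(1 + heads, 1 + tails) and its mean has a closed form.
alpha, beta = 1 + heads, 1 + tails          # Beta(8, 4)
posterior_mean = alpha / (alpha + beta)     # 8/12 ≈ 0.667

print(mle, posterior_mean)
```

In routine work the two estimates differ only slightly (here the prior shrinks the estimate modestly toward 0.5), and practitioners freely use whichever machinery is convenient for the problem at hand, which is the point being made above.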