On the other hand, a lot of the basic ideas of rationality need Bayes' theorem to justify them. In particular, something can only be demonstrated to be a bias with respect to the Bayesian answer. Without understanding probability, a lot of the advice in the sequences would seem like arbitrary rules handed down from above.
Of course, these arbitrary rules would actually work, but I'm not sure that's the best way to teach people.
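(A worked example, not part of the original comment: a minimal sketch of how a bias is demonstrated only by comparison with the Bayesian answer. The bias here is base-rate neglect; the prevalence and test accuracies are illustrative assumptions.)

```python
# Base-rate neglect, exposed against the Bayesian answer.
# Illustrative assumptions: 1% prevalence, a test with 90%
# sensitivity and a 10% false-positive rate.

prior = 0.01            # P(disease)
sensitivity = 0.90      # P(positive | disease)
false_positive = 0.10   # P(positive | no disease)

# Bayes' theorem: P(disease | positive)
evidence = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / evidence

print(f"P(disease | positive) = {posterior:.3f}")  # ~0.083
# The intuitive answer "around 0.9" can be called a bias precisely
# because it diverges from this posterior; without the Bayesian
# benchmark there is nothing for it to diverge from.
```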
On the other hand, a lot of the basic ideas of rationality need Bayes' theorem to justify them. …
Not true. Theorem:Bayes is simply the result of more fundamental information-theoretic heuristics, which would, for the same reasons, be capable of generating the same ideas of rationality on their own. Doing so would probably require a longer inferential path, which is why Theorem:Bayes seems like the grounding principle rather than what it actually is: the best current operationalization of the true grounding principle.
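(A sketch, not from the original comment, of one standard information-theoretic route to Bayes: among all distributions compatible with an observed event, Bayesian conditioning picks the one with minimum relative entropy to the prior. The toy distribution and event below are assumptions for illustration.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy prior over 6 outcomes and an observed event E (both assumptions).
p = np.array([0.05, 0.15, 0.10, 0.30, 0.25, 0.15])
E = np.array([False, True, True, False, True, False])

def kl(q, p):
    """Relative entropy KL(q || p), treating 0 log 0 as 0."""
    mask = q > 0
    return float(np.sum(q[mask] * np.log(q[mask] / p[mask])))

# Bayesian conditioning: renormalize the prior on E.
bayes = np.where(E, p, 0.0) / p[E].sum()

# Any other distribution supported on E is farther from the prior:
# for such q, KL(q || p) = KL(q || p(.|E)) - log p(E) >= -log p(E).
for _ in range(1000):
    q = np.zeros_like(p)
    q[E] = rng.dirichlet(np.ones(E.sum()))
    assert kl(q, p) >= kl(bayes, p) - 1e-12

print(kl(bayes, p), -np.log(p[E].sum()))  # equal: ~0.693
```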
The use of probabilities itself results from the same heuristics. These grounding heuristics form what I have called “correct reasoning”. “Correct reasoning” is the (meta-)heuristic that says a being should use precisely the heuristics that it would be forced to use after starting from an arbitrarily wrong belief set and encountering arbitrarily many instances of informative evidence.
(If one can already see which heuristics more evidence would force one to adopt, one should move to them without waiting for that evidence to arrive; waiting amounts to excessively slow updating.)
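(A minimal simulation, not from the original comment, of the defining property above: start from an arbitrarily wrong belief, feed in a stream of informative evidence, and the updating is forced toward the truth regardless of the starting point. The Bernoulli setting and the numbers are assumptions.)

```python
import numpy as np

rng = np.random.default_rng(1)

true_p = 0.2                 # the coin's actual heads probability
alpha, beta = 80.0, 2.0      # arbitrarily wrong prior: mean ~0.976

flips = rng.binomial(1, true_p, size=10_000)

# Conjugate Beta-Bernoulli updating on growing prefixes of the data.
for n in (0, 10, 100, 1_000, 10_000):
    heads = flips[:n].sum()
    post_mean = (alpha + heads) / (alpha + beta + n)
    print(f"after {n:>6} flips: posterior mean = {post_mean:.3f}")
# The posterior mean is dragged from ~0.976 toward 0.2: enough
# informative evidence forces the same endpoint from any prior.
```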
In a conflict between correct reasoning and Theorem:Bayes, correct reasoning should take precedence.
Therefore, the humans here should say that they are “correct reasoners”, not Bayesians.