I would strongly encourage folks to adopt the view that we are always “using Bayes’ theorem” when reasoning.
This is simply false. As I’m fond of pointing out, often the best judgment you can come up with is produced by entirely opaque processes in your head, whose internals are inaccessible to you no matter how hard you try to introspect on them. Pretending that you can somehow get around this problem and reduce all your reasoning to clear-cut Bayesianism is sheer wishful thinking.
Moreover, even when you are applying exact probabilistic reasoning in evaluating evidence, the numbers you work with often have a common-sense justification that you cannot reduce to Bayesian reasoning in any practically useful way. Knowledge of probability theory will let you avoid errors such as the prosecutor’s fallacy, but this leaves more fundamental underlying questions open. Are the experts who vouch for these forensic methods reliable, or just quacks and pseudoscientists? Are the cops and forensic experts presenting real or doctored evidence, and are they telling the truth or perjuring themselves in cooperation with the prosecution? You can be all happy and proud that you’ve applied Bayes’ theorem correctly and avoided the common fallacies, and still your conclusion can be completely remote from reality because the numbers you’ve fed into the formula are a product of quackery, forgery, or perjury. And if you think you know a way to apply Bayesianism to detect these reliably, I would really like to hear it.
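To make the prosecutor’s-fallacy point concrete, here is a minimal sketch in Python with entirely hypothetical numbers (the one-in-a-million match probability and the suspect-pool size are illustrative assumptions, not figures from any real case). It shows what Bayes’ theorem actually buys you: keeping P(match | innocent) distinct from P(innocent | match).

```python
# Prosecutor's fallacy, illustrated with hypothetical numbers.
# The fallacy: equating P(match | innocent) with P(innocent | match).

def posterior_innocent(p_match_given_innocent, prior_innocent):
    """P(innocent | match) via Bayes' theorem.

    Assumes (for simplicity) that a guilty suspect matches with probability 1.
    """
    p_match_given_guilty = 1.0
    prior_guilty = 1.0 - prior_innocent
    # Total probability of observing a match, innocent or guilty.
    p_match = (p_match_given_innocent * prior_innocent
               + p_match_given_guilty * prior_guilty)
    return p_match_given_innocent * prior_innocent / p_match

# Hypothetical forensic claim: an innocent person matches 1 time in a million.
p_match_given_innocent = 1e-6

# Hypothetical prior: the suspect is one of 100,000 people who could have done it.
prior_innocent = 1.0 - 1.0 / 100_000

print(posterior_innocent(p_match_given_innocent, prior_innocent))
# ~0.09: even with a one-in-a-million match, roughly a 9% chance of innocence,
# nothing like the 0.0001% the fallacious reading of the match statistic suggests.
```

Of course, the computation is only as trustworthy as the one-in-a-million figure fed into it, which is exactly the point above.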
Given the context, I interpreted Komponisto’s comment as saying that to the extent that we reason correctly we are using Bayes’ theorem, not that we always reason correctly.
Even if the claim is worded like that, it implies (incorrectly) that correct reasoning should not involve steps based on opaque processes that we are unable to formulate explicitly in Bayesian terms. To take an example that’s especially relevant in this context, assessing people’s honesty, competence, and status is often largely a matter of intuitive judgment, whose internals are as opaque to your conscious introspection as the physics calculations that your brain performs when you’re throwing a ball. If you examine rigorously the justification for the numbers you feed into Bayes’ theorem, it will inevitably involve some such intuitive judgment that you can’t justify in Bayesian terms. (You could do that if you had a way of reverse-engineering the relevant algorithms implemented by your brain, of course, but that is not currently possible.)
Of course, you can define “reasoning” to refer only to those steps in reaching the conclusion that are performed by rigorous Bayesian inference, and use some other word for the rest. But then to avoid confusion, we should emphasize that reaching any reliable conclusion about the facts in a trial (or almost any other context) requires a whole lot of things other than just “reasoning.”
Even if the claim is worded like that, it implies (incorrectly) that correct reasoning should not involve steps based on opaque processes that we are unable to formulate explicitly in Bayesian terms.
You misunderstand. There was no normative implication intended about explicit formulation. My claim is much weaker than you think (but also abstract enough that it may be difficult to understand how weak it is). I simply assert that Bayesian updating is a mathematical definition of what “inference” means, in the abstract. This says nothing about the details of how humans process information, nor about how mathematically explicit we “should” be about our reasoning in order for it to be valid. You concede everything you need to in order to agree with me when you write:
You could [justify intuitive judgements in Bayesian terms] if you had a way of reverse-engineering the relevant algorithms implemented by your brain,
In fact, this actually concedes more than necessary—because it could turn out that these algorithms are only approximately Bayesian, and my claim about Bayesianism as the ideal abstract standard would still hold (as indeed implied by the phrase “approximately Bayesian”).
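For readers who want the abstract standard spelled out, here is a minimal statement of the updating rule in standard notation (a generic textbook formulation, not anything specific to this exchange):

```latex
% Bayesian updating: posterior credence in hypothesis H given evidence E,
% from a prior P(H) and a likelihood P(E | H).
\[
  P(H \mid E) \;=\; \frac{P(E \mid H)\, P(H)}{P(E)},
  \qquad
  P(E) \;=\; \sum_{i} P(E \mid H_i)\, P(H_i).
\]
% The claim above is only that any process deserving the name "inference"
% is, in the abstract, an (at least approximate) instance of this
% prior-to-posterior map, however opaquely a brain happens to implement it.
```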
Of course, this does in my view have the implication that it is appropriate for people who understand Bayesian language to use it when discussing their beliefs, especially in the context of a disagreement or other situation where one person doesn’t understand the other’s thought process. I suspect this is the real point of controversy here (cf. our previous arguments about using numerical probabilities).
Of course, this does in my view have the implication that it is appropriate for people who understand Bayesian language to use it when discussing their beliefs, especially in the context of a disagreement or other situation where one person doesn’t understand the other’s thought process. I suspect this is the real point of controversy here (cf. our previous arguments about using numerical probabilities).
Yes, the reason why I often bring up this point is the danger of spurious exactitude in situations like these. Clearly, if you are able to discuss the situation in Bayesian language while staying aware of the non-Bayesian loose ends involved, that’s great. The problem is that I often observe a tendency to pretend that these loose ends don’t exist. Moreover, the parts of reasoning that are opaque to introspection are typically the most problematic ones, and in most cases their problems can’t be ameliorated by any formalism, only handled on a messy case-by-case heuristic basis. The emphasis on Bayesian formalism distracts from these crucial problems.
If we actually knew how to reason correctly, we could program computers to do it. We reason correctly, better than computers, without understanding how we do it.