Good post.

I’m not sure that ‘privileging the hypothesis’ deserves to be called a fallacy, though. It’s only a bad idea because of the biases that humans happen to have. It can lead to misconceptions for us primates, but it’s not a logical error in itself, is it?
It may not be a completely generic bias or fallacy, but it certainly can affect more than just human decision processes. There are a number of primitive systems that exhibit pathologies similar to what Eliezer is describing; speech recognition systems, for example, have a huge issue almost exactly isomorphic to this. Once some interpretation of an audio wave becomes a hypothesis, it is chosen far in excess of its real probability or confidence. This is the primary weakness of rule-based voice grammars: their pre-determined set of possible interpretations leads to unexpected inputs being slotted into the nearest pre-existing hypothesis, rather than producing a novel interpretation. The use of statistical grammars, which try to pull interpretations back toward their ‘natural’ probabilistic initial weights, is an attempt to avoid this issue.
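Something like the following toy sketch is what I have in mind; the phrases, feature vectors, priors, and threshold are all invented for illustration rather than taken from any real recognizer:

```python
# Toy sketch, not a real recognizer: it only illustrates the contrast above
# between a rule-based grammar (forced match) and a statistical one.

phrase_features = {                       # hypothetical acoustic signature per in-grammar phrase
    "call home":  [0.9, 0.1, 0.0],
    "call mom":   [0.8, 0.2, 0.0],
    "play music": [0.1, 0.1, 0.8],
}
language_prior = {"call home": 0.5, "call mom": 0.3, "play music": 0.2}

def acoustic_score(audio, phrase):
    """Stand-in for an acoustic model: crude similarity in [0, 1]."""
    return sum(min(a, b) for a, b in zip(audio, phrase_features[phrase]))

def rule_based_decode(audio):
    # A fixed grammar always answers with *some* in-grammar phrase, however
    # poor the fit: unexpected input is slotted into the nearest hypothesis.
    return max(phrase_features, key=lambda p: acoustic_score(audio, p))

def statistical_decode(audio, reject_below=0.4):
    # Weight each hypothesis by likelihood * prior, and admit "no good match"
    # when even the best-scoring hypothesis is weak.
    scores = {p: acoustic_score(audio, p) * language_prior[p] for p in phrase_features}
    best = max(scores, key=scores.get)
    return best if scores[best] >= reject_below else None

out_of_grammar = [0.3, 0.3, 0.3]           # an utterance none of the phrases fit well
print(rule_based_decode(out_of_grammar))   # -> 'call mom': a confident, forced match
print(statistical_decode(out_of_grammar))  # -> None: nothing in the grammar is credible
```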
This problem is also hidden in a great many AI decision systems, within the ‘hypothesis generation’ system or its equivalent. However elegant the ranking and updating machinery, if your initial list of candidates is weak, you distort the whole decision process.
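As a minimal sketch of that point (the hypotheses, priors, and likelihoods below are invented), even an exact Bayesian updater goes confidently wrong when its generator never formulates the true explanation:

```python
# An exact Bayesian update is only as good as the candidate list the
# hypothesis generator hands it. All numbers here are toy values.

def posterior(candidates, prior, likelihood, observations):
    """Exact Bayesian update restricted to the generated candidate set."""
    weights = {h: prior[h] for h in candidates}
    for obs in observations:
        weights = {h: w * likelihood[h][obs] for h, w in weights.items()}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

# Probability of observing 'hot' or 'cold' under each hypothesis.
likelihood = {
    "sensor_ok":     {"hot": 0.10, "cold": 0.90},
    "sensor_broken": {"hot": 0.50, "cold": 0.50},
    "fire":          {"hot": 0.99, "cold": 0.01},   # the true explanation
}
prior = {"sensor_ok": 0.7, "sensor_broken": 0.25, "fire": 0.05}
observations = ["hot", "hot", "hot"]

# A weak generator that never formulates the 'fire' hypothesis:
print(posterior(["sensor_ok", "sensor_broken"], prior, likelihood, observations))
# -> nearly all mass on 'sensor_broken', stated with high confidence

# The same updater given the full candidate list:
print(posterior(["sensor_ok", "sensor_broken", "fire"], prior, likelihood, observations))
# -> most of the mass shifts to 'fire'
```

The updating step is identical in both calls; only the candidate list differs.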
This problem is also hidden in a great many AI decision systems within the ‘hypothesis generation’ system
At that point we’re dealing with a full-fledged artificial heuristic and bias—the generation system is the heuristic, and the bias is the overly limited collection of hypotheses it manages to formulate for explicit attention at a given point.
I’d reserve “fallacy” for motivated or egregious cases, the sort that humans try to get away with.
Is the ability to explicitly reach down (from a high, abstract level) into the initial hypothesis generation and include, raise, or add hypotheses for consideration, then, always a pathology?
I can imagine a system where extremely low-probability hypotheses, by virtue of their complexity or the special evidence they require, might need to be formulated or added by high-level processes. But you could simply view that as another failure of the generation system, and require that even extremely rare or novel structures of hypotheses go through channels, to avoid this kind of disturbance of the natural frequencies, as it were.
It’s most definitely a fallacy. It puts forth a conclusion without sufficient evidence to justify that conclusion, just like an argument from authority or the gambler’s fallacy.
It’s not actually putting it forth as a conclusion though—it’s just a flaw in our wetware that makes us interpret it as such. We could imagine a perfectly rational being who could accurately work out the probability of a particular person having done it, then randomly sample the population (or even work through each one in turn) looking for the killer. Our problem as humans is that once the idea is planted, we overreact to confirming evidence.
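For concreteness, here is a toy version of that contrast (the suspect count and evidence strengths are numbers I made up): a straight Bayesian update for one named suspect, next to an update that double-counts the confirming evidence once the hypothesis has our attention.

```python
# Merely attending to one suspect does nothing to a correct posterior, but an
# updater that re-counts confirming evidence for the attended hypothesis
# inflates it. Population size and evidence strengths are made-up numbers.

N = 1000                                   # possible suspects, uniform prior
prior = 1.0 / N
# Weak circumstantial evidence: fits the killer well, but also fits 5% of innocents.
p_evidence_given_guilty = 0.9
p_evidence_given_innocent = 0.05

def posterior_guilty(prior, times_counted=1):
    """Odds-form Bayes update for one named suspect; times_counted > 1 models
    mentally re-counting the same confirming evidence once the idea is planted."""
    likelihood_ratio = (p_evidence_given_guilty / p_evidence_given_innocent) ** times_counted
    posterior_odds = (prior / (1 - prior)) * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Ideal reasoner: naming the suspect changes nothing; the evidence leaves them unlikely.
print(posterior_guilty(prior))                    # ~0.018
# Privileged hypothesis: the same evidence, counted twice for the focal suspect.
print(posterior_guilty(prior, times_counted=2))   # ~0.24
```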