This problem is also hidden in a great many AI decision systems, inside the "hypothesis generation" stage.
At that point we’re dealing with a full-fledged artificial heuristic and bias—the generation system is the heuristic, and the bias is the overly limited collection of hypotheses it manages to formulate for explicit attention at a given point.
I’d reserve “fallacy” for motivated or egregious cases, the sort that humans try to get away with.
Is the ability to explicitly (at a high, abstract level) reach down into initial hypothesis generation and include, raise, or add hypotheses for consideration, then, always a pathology?
I can imagine a system in which extremely low-probability hypotheses, by virtue of their complexity or the special evidence they require, might need to be formulated or added by high-level processes. But you could just as well view that as another failure of the generation system, and require that even extremely rare or novel hypothesis structures go through channels, to avoid this kind of disturbance of the natural frequencies, as it were.
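The "going through channels" idea above can be sketched in code. This is a minimal, hypothetical illustration (the names `Hypothesis`, `Generator`, `propose`, and `inject` are mine, not from any real system): the generator is the heuristic, surfacing only the top-k hypotheses by prior for explicit attention; a high-level process may add a rare hypothesis to the pool, but only with its natural prior intact, so relative frequencies are not distorted.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Hypothesis:
    name: str
    prior: float  # prior probability mass assigned at generation time

class Generator:
    def __init__(self, pool, k=3):
        self.pool = list(pool)  # hypotheses the system can formulate
        self.k = k              # how many reach explicit attention

    def propose(self):
        # The "heuristic": only the k most probable hypotheses are
        # formulated for explicit consideration. Everything else is
        # silently excluded -- the "bias" described above.
        return sorted(self.pool, key=lambda h: h.prior, reverse=True)[:self.k]

    def inject(self, hypothesis):
        # The high-level channel: a rare or novel hypothesis enters the
        # pool with its natural prior, rather than being promoted
        # directly into the candidate set, so it still competes on the
        # same terms as everything else.
        self.pool.append(hypothesis)

gen = Generator([
    Hypothesis("common-cause", 0.6),
    Hypothesis("measurement-error", 0.25),
    Hypothesis("coincidence", 0.1),
])
gen.inject(Hypothesis("novel-mechanism", 0.001))
print([h.name for h in gen.propose()])
# → ['common-cause', 'measurement-error', 'coincidence']
```

Note that the injected hypothesis does not surface, because its prior is too low to crack the top k. That is the design choice in question: the alternative, appending it straight to the output of `propose`, would surface it immediately but at the cost of disturbing the natural frequencies.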