By examining our cognitive pieces (techniques, beliefs, etc.) one at a time in light of the others, we check not for adherence of our map to the territory but rather for the map’s self-consistency.
This would appear to be the best an algorithm can do from the inside. Self-consistency may not guarantee truth, but it does mean the algorithm can't find anything wrong with itself. (Of course, if your algorithm relies on observational inputs, there should be a theoretical set of observations that would break its self-consistency and thus force further reflection.)