Highly unlikely in a human. I don’t know, but I’d guess that self-checking is one subsystem. Lose that, and the plainest contradictions pass as uninteresting.
On the other hand, the engineering challenge catches my interest. Might there be any way to train other parts of your mind, parts that normally don’t do checking, to sync up when everything is working and to intervene when the normal part is out of action? Get the right brain in on the act, perhaps. That might give you something like Dune “truth sense” but turned inward. It would certainly feel very different from normal reasoning.
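For what it’s worth, the engineering analogue of “intervene if the normal part is out of action” is a watchdog: an independent monitor that stays quiet while the primary process checks in on schedule and steps in when it goes silent. A toy sketch in Python (everything here, names included, is illustrative; it is obviously not a model of any actual cognitive architecture):

```python
import threading
import time

class Watchdog:
    """Independent monitor: quiet while the primary subsystem checks in,
    intervenes if the check-ins stop."""

    def __init__(self, timeout, on_failure):
        self.timeout = timeout          # seconds of silence tolerated
        self.on_failure = on_failure    # intervention to run if the checker dies
        self._last_beat = time.monotonic()
        self._lock = threading.Lock()

    def heartbeat(self):
        """Called by the primary subsystem each time its own self-check passes."""
        with self._lock:
            self._last_beat = time.monotonic()

    def run(self):
        """Poll for silence. Note the watchdog never inspects the reasoning
        itself, only whether checking is still happening."""
        while True:
            time.sleep(self.timeout / 4)
            with self._lock:
                silent = time.monotonic() - self._last_beat
            if silent > self.timeout:
                self.on_failure()
                return

dog = Watchdog(timeout=1.0,
               on_failure=lambda: print("self-checker down; second system intervenes"))
threading.Thread(target=dog.run).start()
# A healthy primary would call dog.heartbeat() after every passing self-check;
# here nothing does, so the watchdog fires after a second of silence.
```

The point of the pattern is that the monitor doesn’t need to be smart; it only needs to notice that the smart part has stopped doing its job.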
Conscious-mind rationality is good—might unconscious-mind rationality be better? You could self-monitor even on autopilot.
How many subsystems can be made rational?
There are many reasons to expect that the non-conscious part of the mind is largely arational, in the LW senses of rationality. My impression is that it operates mostly on trained responses and associative pattern matching, mediated by emotional responses. In practice this means it can often be more rational in certain ways than the conscious mind, because it seems to be better at collecting and correlating information; cf. people who have a non-rational aversion to a food for reasons they don’t consciously understand, then years later discover they’re actually allergic to it.
I expect the better approach would be to deliberately train the non-conscious mind to use associations and heuristics derived by the rational conscious mind, and I mean “train” in the sense of “training a dog”.
Any sort of high-level self-monitoring is probably beyond its capabilities, though perhaps recognizing warning signs and alerting the conscious mind would work. Some sort of “panic on unexpected input” heuristic, I guess.
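To make that concrete, here is a toy sketch of a “panic on unexpected input” heuristic, under the (purely illustrative) assumption that the non-conscious layer amounts to nearest-pattern lookup over responses the conscious mind drilled into it: familiar stimuli get the cached response on autopilot; anything too far from every trained pattern escalates instead of guessing.

```python
import math

class UnexpectedInput(Exception):
    """The escalation signal to the slow, deliberate checker."""

class AssociativeLayer:
    """Toy non-conscious layer: nearest-neighbour lookup over patterns the
    conscious mind trained in, with a panic threshold for the unfamiliar."""

    def __init__(self, trained, panic_distance):
        self.trained = trained                  # {pattern (tuple): cached response}
        self.panic_distance = panic_distance    # how much novelty triggers panic

    def respond(self, stimulus):
        # Find the nearest trained pattern (plain Euclidean distance).
        response, dist = min(
            ((resp, math.dist(stimulus, pat)) for pat, resp in self.trained.items()),
            key=lambda pair: pair[1],
        )
        if dist > self.panic_distance:
            # Panic on unexpected input: wake the conscious mind, don't guess.
            raise UnexpectedInput(f"novel stimulus, distance {dist:.2f}")
        return response                         # autopilot: cached association

layer = AssociativeLayer({(0.0, 0.0): "ignore", (1.0, 1.0): "approach"},
                         panic_distance=0.5)
print(layer.respond((0.1, 0.0)))                # familiar: handled on autopilot
try:
    layer.respond((5.0, 5.0))                   # unfamiliar: escalates
except UnexpectedInput as alarm:
    print("alert conscious mind:", alarm)
```

The interesting design question is where to set the panic threshold: too low and the conscious mind is pestered constantly; too high and the plainest contradictions pass as uninteresting, which is exactly the failure mode described at the top.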