Would an accurate summary of this be “humans have a generic, intuitive, System 1 Truth-detector that does not distinguish between reality-correspondence, agreeability, tribal signaling, etc., but just assigns +1 Abstract Truth Weight to all of them; distinguishing between the different things that trip this detector is a System 2 operation”? That seems... surprisingly plausible to me. It also seems like something one could test with whatever it is scientists use to look at brain activity.
Hook a person up to a brain scanner. Give them true and false statements to evaluate. Also give them statements distinguished by, say, the status of the speaker. Perhaps add Green/Blue-coded statements if they’re of a political bent.
Then see if the same brain regions light up in each case.
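A minimal sketch of what that comparison could look like, assuming (hypothetically) that we already have per-trial voxel activation patterns and the subject's accept/reject judgments from the scanner, and using numpy/scikit-learn with purely synthetic placeholder data. The idea: train a decoder on “felt true” vs. “felt false” trials in the plain factual condition, then check whether it transfers to the speaker-status and Green/Blue conditions; if the same regions and patterns drive the judgment everywhere, transfer accuracy should stay well above chance.

```python
# Sketch only: synthetic data stands in for real scanner output.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def fake_trials(n_trials=200, n_voxels=500):
    """Placeholder for real per-trial activation patterns and judgments."""
    X = rng.normal(size=(n_trials, n_voxels))
    y = rng.integers(0, 2, size=n_trials)  # 1 = "rang true", 0 = "rang false"
    return X, y

# Hypothetical conditions: plain factual statements, speaker-status-loaded
# statements, and politically (Green/Blue) coded statements.
X_fact, y_fact = fake_trials()
X_status, y_status = fake_trials()
X_tribal, y_tribal = fake_trials()

# Train a truth/falsehood decoder on the factual condition only.
clf = LogisticRegression(max_iter=1000).fit(X_fact, y_fact)

# Test whether the same activation pattern predicts the judgment in the
# other conditions; high transfer accuracy would suggest one shared detector.
for name, (X, y) in [("status", (X_status, y_status)),
                     ("tribal", (X_tribal, y_tribal))]:
    print(f"factual -> {name} transfer accuracy: {clf.score(X, y):.2f}")
```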
That’s not how System 1 works, in my experience. System 1 is only concerned with modeling the world and making predictions, particularly about the results of various actions one might take. Its model, however, tends to be extremely primitive. Also, System 2 doesn’t have direct access to the model, only to the predictions. Furthermore, as far as System 1 is concerned, making statements, or even getting System 2 to believe something, are just actions whose consequences are to be predicted.
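A toy sketch of that picture (purely illustrative, not anyone's actual cognitive model): System 1 holds a crude hidden world model and exposes only predicted consequences of candidate actions, where “say X” and “get System 2 to believe X” are just more actions to be scored.

```python
# Illustrative toy only; the "world model" is a hard-coded dictionary.
class System1:
    def __init__(self):
        # Hidden, primitive world model; System 2 never reads this directly.
        self._model = {"boss_likes_flattery": True, "stove_is_hot": True}

    def predict(self, action: str) -> str:
        """Return only a predicted consequence, never the model itself."""
        if action == "say 'great idea, boss'":
            return "approval" if self._model["boss_likes_flattery"] else "nothing"
        if action == "believe 'the stove is safe to touch'":
            return "burned hand" if self._model["stove_is_hot"] else "no harm"
        return "unknown"

s1 = System1()
# System 2's only interface: ask for predicted consequences of actions,
# including speech acts and belief changes, and choose among them.
for action in ["say 'great idea, boss'", "believe 'the stove is safe to touch'"]:
    print(action, "->", s1.predict(action))
```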