All of the theories presented in this post seem to make the implausible assumption that the brain acts like a hypothetical ideally rational individual, and that impairment breaks some aspect of this rationality.
However, there is a great deal of evidence that the brain works nothing like this. Instead, it has many specific modules that are responsible for particular kinds of thought or behavior. These modules are not weighed by some rational actor that sifts through them; they are the brain. When these modules come into conflict, e.g., in the standard word/color (Stroop) test where the word 'yellow' is printed in red ink, fairly simple conflict-resolution mechanisms are brought into play. When things go wrong in the brain, whether through an impairment in those conflict-resolution mechanisms or in the underlying modules themselves, they go wonky in specific (not general) ways.
Speaking from personal experience, being in a psychotic/paranoid state simply makes certain things seem super salient to you. You can be quite aware of the rational arguments against the conclusion you are worrying about, but it's just so salient that it 'wins.' In other words, it feels like a failure in your ability to override certain misbehaving brain processes rather than some general inability to update appropriately. This is further supported by the fact that schizophrenics and others with delusions seem able to update largely appropriately in certain respects, e.g., about what answer is expected on a test, while still maintaining their delusional state.
This is generally a good comment, but I think the views of the original post and your comment are actually pretty similar. For example, seeing the brain as a rational Bayesian agent is compatible with the modular view. One module might store beliefs, another might be responsible for forming new candidate beliefs on the basis of sensory input, another module may enforce consistency and weaken beliefs which don’t fit in… The “rational actor that sifts through [the modules]” could easily be embodied by one or several of the modules themselves. Whether this is a good model is a more complicated question (it certainly isn’t perfect since we know people diverge from the Bayesian ideal quite regularly), but it is not implausible.
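To make that concrete, here is a toy sketch (my own illustration, with made-up module names, not a model proposed in the post or the comment): one module stores credences, one turns sensory input into likelihoods, and one enforces consistency. Taken together they behave like a single Bayesian updater even though no individual module is "the rational actor."

```python
# Toy illustration (hypothetical module names): a "Bayesian agent" realized as
# separate modules, where the sifting is just one more module applying
# Bayes' rule to the others' outputs.

class BeliefStore:
    """Holds current credences over a set of hypotheses."""
    def __init__(self, priors):
        self.credences = dict(priors)

class CandidateGenerator:
    """Turns sensory input into likelihoods for each hypothesis."""
    def likelihoods(self, observation, hypotheses):
        # Placeholder: a real perceptual module would compute P(observation | h).
        return {h: observation.get(h, 0.5) for h in hypotheses}

class ConsistencyEnforcer:
    """The 'sifting' module: ordinary Bayesian updating with renormalization."""
    def update(self, store, likelihoods):
        posterior = {h: store.credences[h] * likelihoods[h] for h in store.credences}
        total = sum(posterior.values())
        store.credences = {h: p / total for h, p in posterior.items()}

# The three modules jointly approximate a single Bayesian agent.
store = BeliefStore({"threat": 0.1, "no_threat": 0.9})
observation = {"threat": 0.7, "no_threat": 0.2}  # fit of the percept to each hypothesis
ConsistencyEnforcer().update(store, CandidateGenerator().likelihoods(observation, store.credences))
print(store.credences)  # credence in "threat" rises from 0.1 to ~0.28
```

Whether the brain's conflict-resolution machinery is anywhere near this tidy is exactly the "more complicated question" above; the point is only that modularity and approximate Bayesian updating are not mutually exclusive.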
However, even if there are modules that try to form accurate beliefs about some things or even most things (and there probably are), it's still the case that, taken in aggregate, your various brain modules push you toward beliefs that would have been locally optimal for fitness in the ancestral environment, not necessarily toward accurate ones. Many modules in our brain push us toward believing things that would be praised and away from things that would be condemned or ridiculed.
It’s too costly to be a perfect deceiver, so evolution hacked together a system where if it’s consistently beneficial to your fitness for others to believe you believe X, most of the time you just go ahead and believe X.
In large realms of thought, especially far-mode beliefs, political beliefs, and beliefs about the self, the net result of all your modules working together is that you're pushed toward status and social advantage, not truth. Maybe there aren't even any truth-seeking modules with respect to these classes of belief. Maybe we call it delusion when your near-mode, concrete anticipations start behaving like your far-mode, political beliefs.