And you don’t often see the view that one part of the brain wants to do something while another part wants to do something else: a tug of war where the outcome isn’t even picked through some reasoning fallacy, but where one side simply overpowers the other.
It seems to me that LW discussions tend to focus on the model you ascribe to them because it’s a model of rational decision making. What you describe, one part of the brain wanting something and another part wanting something else, doesn’t sound like a rational decision; I mean especially the phrase “simply overpowers”. No doubt something like that actually happens, but why should it be relevant to the way we make rational decisions?
But minimizing such tugs of war is important for being able to do any rational decision making at all in real life, where some part of the brain gets priority because handling situations like that is that part’s job, and it switches the rational process off.
It’s not that we don’t know how to rationally approach choosing the best option; it’s that when electing a president, or drawing a conclusion on global warming, or doing anything else that’s relevant, some group-allegiance process kicks in and rationality goes out like a candle in a hurricane-strength wind.
edit: Also, haven’t we all at some point done something against our own best interest and society’s best interest, despite knowing very well how to reason correctly to avoid inflicting this self-harm? The first task of a rationalist who knows he’s running on very glitchy hardware would be to try to understand how his hardware is glitching.
That’s a fair point, but I guess I don’t see how what you’re describing is therefore a new model of thinking. If rational thinking is serial while irrational, impulsive action is non-serial, then the non-serial model of psychology doesn’t come into conflict with the serial model of thinking. They could both be true.
Also, I sometimes feel like we should taboo the whole computer metaphor of thought. Hardware, software, glitching, etc.
I don’t believe that rational thinking can be serial; that is my point. Consider an un-obvious solution to a problem, a genuinely good and effective one. To arrive at it, there has to be a search through a vast solution space, a search the human brain can only perform in parallel, by breaking the space apart and combining the results. When this search ignores an important part of the solution space, you may end up with a solution that is grossly suboptimal or goes against what you believe your goals are. Meanwhile, in serial, deliberate thought it is usually impossible to enumerate all possible solutions and compare them in any reasonable time; one can compare some of the solutions deliberately, but those candidates are picked by a parallel process one can’t introspect.
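Since the thread is already borrowing CS vocabulary, here is a minimal, purely illustrative Python sketch of that idea (the objective function, the partitioning, and every name in it are hypothetical, chosen for illustration, not a model of actual cognition): partition a solution space, search each part in parallel, and combine the partial winners. The second call shows the failure mode described above: a search whose partition ignores part of the space still returns a confident but grossly suboptimal answer, with no error raised.

```python
# Hypothetical sketch: "break the space apart, search in parallel,
# combine the results". Not anyone's actual model of cognition.

from concurrent.futures import ProcessPoolExecutor

def score(candidate: int) -> float:
    """Toy objective: how good a candidate solution is (peak at 9000)."""
    return -(candidate - 9_000) ** 2

def best_in_chunk(chunk: range) -> tuple[float, int]:
    """Serially search one piece of the solution space."""
    return max((score(c), c) for c in chunk)

def parallel_search(space: range, n_chunks: int) -> tuple[float, int]:
    """Partition the space, search each part in parallel, combine."""
    step = max(1, len(space) // n_chunks)
    chunks = [space[i:i + step] for i in range(0, len(space), step)]
    with ProcessPoolExecutor() as pool:
        return max(pool.map(best_in_chunk, chunks))

if __name__ == "__main__":
    print(parallel_search(range(0, 10_000), n_chunks=8))  # finds 9000
    # An "ignored part of the solution space": searching only the
    # first half confidently returns 4999, grossly suboptimal.
    print(parallel_search(range(0, 5_000), n_chunks=8))
```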
I did not mean to use words from CS in a metaphorical sense, by the way. It’s just that computing is the field that has good words for these concepts.