In order to not compartmentalize, you need to test whether your beliefs are all consistent with each other. If your beliefs are all statements in propositional logic, consistency checking becomes the Boolean Satisfiability Problem, which is NP-complete. If your beliefs are statements in first-order predicate logic, consistency checking becomes undecidable, which is even worse than NP-complete.
Not compartmentalizing isn’t just difficult, it’s basically impossible.
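For concreteness, here is a minimal sketch (my own toy construction, not anything from the comment above) of what consistency checking as satisfiability looks like when done by brute force: each belief is a propositional formula over a handful of variables, and the fully general check enumerates all 2^n truth assignments.

```python
from itertools import product

def consistent(beliefs, variables):
    """Return True if some truth assignment satisfies every belief.

    Brute force over all 2**len(variables) assignments -- exactly the
    blow-up that makes consistency checking (Boolean satisfiability)
    intractable for a large, densely connected belief set.
    """
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(belief(assignment) for belief in beliefs):
            return True
    return False

# Toy belief set: "it rains", "rain implies wet streets", "the streets are dry".
beliefs = [
    lambda a: a["rain"],
    lambda a: (not a["rain"]) or a["wet"],
    lambda a: not a["wet"],
]
print(consistent(beliefs, ["rain", "wet"]))  # False: the three beliefs clash
```

With n independent propositions the loop runs 2^n times, which is the intractability showing up in practice.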
Reminds me of the opening paragraph of The Call of Cthulhu.

The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the light into the peace and safety of a new dark age.
Glenn Beck:

When I sobered up I started looking at all of the things that I believed in and decided to take everything out and only put the things back in me that I knew to be true… and then I would put it back in and then I would look at all the other things that I found were true and then I would match them and if one of them didn’t fit with the other then one of them had to be wrong. (source)
P.S. The trick is to use bubble sort.
GEB has a section on this.

It took me several seconds to guess that GEB refers to Gödel, Escher, Bach.
Sorry about that!
I agree, save that I think Academian’s proposal should be applied and “compartmentalizing” replaced with “clustering”. “Compartmentalization” is a more useful term when restricted to describing the failure mode.
Could I express what you said as:
A person is in the predicament of:
1) having a large number of beliefs
2) the computationally intractable task of checking those beliefs for consistency
Therefore:
3) It is impossible to not compartmentalize
This leads to a few questions:
Is it still valuable to reduce, albeit not eliminate, compartmentalization?
Is there a fast method to rank how impactful a belief is to my belief system, in order to predict whether an expensive consistency check is worthwhile?
Is it possible to arrive at a (mathematically tractable) small core set of maximum-impact beliefs that are consistent? (the goal of extreme rationality?)
Does probabilistic reasoning change how we answer these questions?
Edwin Jaynes discusses “lattice” theories of probability, in which propositions are not universally comparable, in Appendix A of Probability Theory: The Logic of Science. Following Jaynes’s account, probability theory would correspond to a uniformly dense lattice, whereas a lattice with very sparse structure and a few dense regions would correspond to compartmentalized beliefs.
Yes, that’s basically right.
As for those questions, I don’t know the answers either.
Rationalism is faith to you then?
[EDIT: An explanation is below that I should have provided in this comment; obviously when I made the comment I assumed people could read my mind; I apologize for my transparency bias]
I’m not sure what you mean...
Is it still valuable to reduce, albeit not eliminate, compartmentalization?

Compartmentalization is an enemy of rationalism. If we are going to say that rationalism is worthwhile, we must also say that reducing compartmentalization is worthwhile. But that argument only scratches the surface of the problem you eloquently pointed out.
Is there a fast method to rank how impactful a belief is to my belief system...

Mathematically, we have a mountain of beliefs that need processing with something better than brute force. We have to be able to quickly identify how impactful beliefs are to our belief system, and focus our rational efforts on those beliefs. (Otherwise we’re wasting our time processing only a tiny randomly chosen part of the mountain.)
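A minimal sketch of one cheap proxy for “impact” (my own assumption, not something proposed in this thread): score each belief by how many other beliefs share a concept with it, and spend the expensive consistency checks on the most connected beliefs first.

```python
from collections import defaultdict

def impact_ranking(beliefs):
    """beliefs: dict mapping a belief name to the set of concepts it touches.

    Rank beliefs by how many other beliefs they share a concept with --
    a crude stand-in for how far a contradiction in them would propagate.
    """
    concept_index = defaultdict(set)
    for name, concepts in beliefs.items():
        for concept in concepts:
            concept_index[concept].add(name)

    scores = {}
    for name, concepts in beliefs.items():
        neighbours = set().union(*(concept_index[c] for c in concepts)) - {name}
        scores[name] = len(neighbours)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical beliefs and the concepts they involve.
example = {
    "materialism": {"mind", "physics", "free will"},
    "dualism": {"mind", "free will"},
    "folk physics": {"physics"},
    "favourite colour is blue": {"aesthetics"},
}
print(impact_ranking(example))  # most connected (highest-impact) beliefs first
```

Graph centrality measures would do the same job more carefully; the point is only that the ranking step can be cheap even when the consistency check itself is not.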
Is it possible to arrive at a (mathematically tractable) small core set of maximum-impact beliefs that are consistent? (the goal of extreme rationality?)

Rationality, if it’s actually useful, should provide us with at least a small set of consistent and maximally impactful beliefs. We have not escaped compartmentalization of all our beliefs, but at least we have chosen the most impactful compartment within which we have consistency.
Does probabilistic reasoning change how we answer these questions?

Finally, if we can’t perfectly process our mountain of beliefs, then at least we can imperfectly process that mountain. Hence the need for probabilistic reasoning.
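One concrete way to read “imperfectly process that mountain” (again my own illustration, reusing the brute-force consistent() checker sketched earlier in the thread): spot-check small random subsets of beliefs instead of the whole set. This surfaces local contradictions cheaply but never certifies global consistency.

```python
import random

def spot_check(beliefs, variables, consistent, subset_size=3, trials=200):
    """Sample small random subsets of beliefs and test each for consistency.

    `consistent` is any checker for a small belief subset (for example the
    brute-force one above). Returns the inconsistent subsets found -- an
    imperfect, probabilistic audit rather than a full verification.
    """
    conflicts = []
    for _ in range(trials):
        subset = random.sample(beliefs, min(subset_size, len(beliefs)))
        if not consistent(subset, variables):
            conflicts.append(subset)
    return conflicts
```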
To summarize, I want to be able to answer “yes” to all of these questions, to justify the endeavor of rationalism. The problem is that, like you, my answer for each is “I don’t know”. For this reason, I accept my rationalism is just faith, or perhaps less pejoratively, intuition (though we’re talking rationality here, right?).