[EDIT: An explanation is below that I should have provided in this comment; obviously when I made the comment I assumed people could read my mind; I apologize for my illusion-of-transparency bias]
Is it still valuable to reduce, albeit not eliminate, compartmentalization?
Compartmentalization is an enemy of rationalism. If we are going to say that rationalism is worthwhile, we must also say that reducing compartmentalization is worthwhile. But that argument only scratches the surface of the problem you eloquently pointed out.
Is there a fast method to rank how impactful a belief is to my belief system...
Mathematically, we have a mountain of beliefs that needs processing with something better than brute force. We have to be able to quickly identify how impactful each belief is to our belief system, and focus our rational efforts on the most impactful ones. (Otherwise we’re wasting our time processing only a tiny, randomly chosen part of the mountain.)
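To make that concrete, here is a toy sketch (the belief graph, the example beliefs, and the reachability score are all hypothetical illustrations, not an established method): model beliefs as a dependency graph and rank each belief by how many others sit downstream of it, since revising a high-impact belief forces the most downstream re-processing.

```python
# Toy sketch of "impact ranking" (all structures hypothetical): model
# beliefs as a directed graph where an edge A -> B means "B depends on A",
# then score each belief by how many other beliefs are downstream of it.
from collections import deque

# Hypothetical belief graph: "induction" -> "physics" means that
# trust in physics depends on trust in induction.
depends_on_me = {
    "induction": ["physics", "statistics"],
    "physics": ["statistics"] and ["engineering"],
    "statistics": ["medicine"],
    "engineering": [],
    "medicine": [],
    "astrology": [],  # isolated belief: revising it changes little else
}
depends_on_me["physics"] = ["engineering"]  # keep the toy graph explicit

def impact(belief: str) -> int:
    """Count beliefs reachable from `belief`, i.e. those its revision could upset."""
    seen, queue = set(), deque([belief])
    while queue:
        for child in depends_on_me[queue.popleft()]:
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return len(seen)

# Rank beliefs by impact, most load-bearing first.
for b in sorted(depends_on_me, key=impact, reverse=True):
    print(f"{b}: affects {impact(b)} downstream beliefs")
```

Any graph-centrality measure would serve the same role; the point is only that “impact” can be given a cheap, computable proxy.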
Is it possible to arrive at a (mathematically tractable) small core set of maximum-impact beliefs that are consistent? (the goal of extreme rationality?)
Rationality, if it’s actually useful, should provide us with at least a small set of consistent and maximally impactful beliefs. We will not have escaped compartmentalization across all our beliefs, but at least we will have chosen the most impactful compartment and made it internally consistent.
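As a sketch of why “small” and “mathematically tractable” go together (the impact scores and the contradiction list below are invented for illustration): finding the highest-impact consistent subset by brute force is exponential in the number of beliefs, so it is only feasible for a small core, never for the whole mountain.

```python
# Minimal sketch of picking a "core set": given hypothetical impact weights
# and a list of known contradictions, brute-force the consistent subset
# with the highest total impact. Checking all subsets is exponential in
# the number of beliefs, which is why this only works for a small core.
from itertools import combinations

impact = {"A": 5, "B": 4, "C": 3, "D": 2}    # hypothetical impact scores
contradictions = {frozenset({"A", "B"}),     # A and B can't both be held
                  frozenset({"C", "D"})}     # neither can C and D

def consistent(beliefs) -> bool:
    """A subset is consistent if no pair of its members contradicts."""
    return not any(frozenset(pair) in contradictions
                   for pair in combinations(beliefs, 2))

best = max(
    (subset
     for r in range(len(impact) + 1)
     for subset in combinations(impact, r)
     if consistent(subset)),
    key=lambda s: sum(impact[b] for b in s),
)
print(best, sum(impact[b] for b in best))  # ('A', 'C') with total impact 8
```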
Does probabilistic reasoning change how we answer these questions?
Finally, if we can’t perfectly process our mountain of beliefs, then at least we can imperfectly process that mountain. Hence the need for probabilistic reasoning.
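The minimal version of that imperfect processing is a Bayesian update: hold a credence in a belief rather than a verdict on it, and revise the credence as evidence arrives. A sketch, with made-up numbers:

```python
# A minimal Bayesian update, the basic move of "imperfect processing":
# instead of declaring a belief true or false, track P(belief) and
# revise it as evidence arrives. All numbers here are illustrative.
def bayes_update(prior: float, p_evidence_if_true: float,
                 p_evidence_if_false: float) -> float:
    """P(belief | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

posterior = bayes_update(prior=0.30,               # initial credence
                         p_evidence_if_true=0.80,  # P(evidence | belief true)
                         p_evidence_if_false=0.10) # P(evidence | belief false)
print(round(posterior, 3))  # 0.774
```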
To summarize, I want to be able to answer “yes” to all of these questions, to justify the endeavor of rationalism. The problem is that, like you, my answer to each is “I don’t know”. For this reason, I accept that my rationalism is just faith, or perhaps, less pejoratively, intuition (though we’re talking rationality here, right?).
Yes, that’s basically right.
As for those questions, I don’t know the answers either.
Rationalism is faith to you then?
I’m not sure what you mean...