So your suggestion is that we should decompartmentalize, but in the reverse direction from the one the OP suggested, i.e. instead of propagating forward from ridiculous far-mode beliefs, become better at back-propagating to them and deleting them? There is certainly merit in that suggestion, if it can be accomplished. Any thoughts on how?
You don’t understand. Decompartmentalization doesn’t have a direction. You don’t propagate forwards towards a belief or backwards from one. If your beliefs are decompartmentalized, the things you believe reliably affect your other beliefs. That means you don’t get to CHOOSE what you believe. If you think the singularity is all-important and worth working for, it’s BECAUSE all of your beliefs align that way, not because you’ve forced your mind into alignment after adopting the belief.
I understand perfectly well how a hypothetical perfectly logical system would work (leaving aside computational tractability and the like). But such a system wouldn’t entertain such far-mode beliefs in the first place. What I’m discussing is the human mind, and the failure modes it actually exhibits.
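For what it’s worth, the “no direction” point has a precise analogue in probabilistic inference: conditioning on any variable in a joint model shifts the posterior of every correlated variable, regardless of which way the edges in the factorization point. Here is a minimal sketch in pure Python; the belief names and probability numbers are purely hypothetical, chosen only to illustrate the idea, not taken from either commenter:

```python
# Toy chain of three binary "beliefs": A -> B -> C, e.g.
# A = "AI progress is fast", B = "singularity is near",
# C = "working on it matters". All numbers are made up.
from itertools import product

P_A = {True: 0.3, False: 0.7}
P_B_given_A = {True: 0.8, False: 0.1}   # P(B=True | A)
P_C_given_B = {True: 0.9, False: 0.2}   # P(C=True | B)

def joint(a, b, c):
    """Joint probability of one full assignment of A, B, C."""
    pa = P_A[a]
    pb = P_B_given_A[a] if b else 1 - P_B_given_A[a]
    pc = P_C_given_B[b] if c else 1 - P_C_given_B[b]
    return pa * pb * pc

def posterior(query, evidence):
    """P(query=True | evidence) by brute-force enumeration."""
    num = den = 0.0
    for a, b, c in product([True, False], repeat=3):
        world = {"A": a, "B": b, "C": c}
        if any(world[k] != v for k, v in evidence.items()):
            continue
        p = joint(a, b, c)
        den += p
        if world[query]:
            num += p
    return num / den

print(posterior("C", {}))            # prior on C       ~0.417
print(posterior("C", {"A": True}))   # "forward": A moves C  ~0.76
print(posterior("A", {"C": True}))   # "backward": C moves A ~0.55
```

Running it shows that evidence about C moves the posterior on A just as evidence about A moves C: the arrows belong to the factorization, not to inference. That is the decompartmentalized ideal the second commenter describes, and, as the third comment notes, it is an ideal that actual human minds fall short of.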