Overall I’m still quite confused, so for my own benefit, I’ll try to rephrase the problem here in my own words:
Engaging seriously with CFAR’s content adds lots of things and takes away a lot of things. You can get the affordance to creatively tweak your life and mind to get what you want, or the ability to reason with parts of yourself that were previously just a kludgy mess of something-hard-to-describe. You might lose your contentment with black-box fences and not applying reductionism everywhere, or the voice promising you’ll finish your thesis next week if you just try hard enough.
But in general, simply taking out some mental stuff and inserting an equal amount of something else isn’t necessarily a sanity-preserving process. This can be true even when the new content is more truth-tracking than what it removed. In a sense people are trying to move between two paradigms—but often without any meta-level paradigm-shifting skills.
Like, if you feel common-sense reasoning is now nonsense, but you’re not sure how to relate to the singularity/rationality stuff, it’s not an adequate response for me to say “do you want to double crux about that?”, for the same reason that reading Bible verses isn’t adequate advice to a reluctant atheist tentatively hanging around church.
I don’t think all techniques are symmetric, or that there aren’t ways of resolving internal conflict which systematically lead to better results, or that you can’t trust your inside view when something superficially pattern matches to a bad pathway.
But I don’t know the answer to the question of “How do you reason when one of your core reasoning tools is taken away? And when those tools have accumulated years of implicit wisdom, instinctively hill-climbing toward protecting what you care about?”
I think sometimes these consequences are noticeable before someone fully undergoes them. For example, after going to CFAR I had close friends who were terrified of rationality techniques, and who were furious when I suggested they make some creative but unorthodox tweaks to their degrees in order to allow more time for interesting side-projects (or, as in Anna’s example, finishing your PhD 4 months earlier). In fact, they were furious even at the mere suggestion that such tweaks might exist. Curiously, these very same friends were also quite high-performing and far above average on Big 5 measures of intellect and openness. They surely understood the suggestions.
There can be many explanations of what’s going on, and I’m not sure which is right. But one idea is simply that 1) some part of them had something to protect, and 2) some part correctly predicted that reasoning about these things in the way I suggested would lead to a major and inevitable upheaval of their lives.
I can imagine inside views that might generate discomfort like this:
“If AI is a problem, and the world is made of heavy-tailed distributions, then only tail-end computer scientists matter; and since I’m not one of those, I lose my ability to contribute to the world and the things I care about won’t matter.”
“If I engaged with the creative and principled optimisation processes rationalists apply to things, I would lose the ability to go to my mom for advice when I’m lost and trust her, or just call my childhood friend and rant about everything-and-nothing for 2h when I don’t know what to do about a problem.”
I don’t know how to do paradigm-shifting, or what meta-level skills are required. Writing these words helped me get a clearer sense of the shape of the problem.
(Note: this comment was heavily edited for clarity following some feedback.)