[The following is a bit of a ramble. I’m making a couple of different points, and have differing levels of confidence in each one, so I’ve segregated them, to avoid everything getting mixed together]
Point 1:
I think there’s something that’s a little bit missing in how this question is asked. Meditation (to my understanding) is the sort of thing where one can do basically the same activity for many millions of reps, and over time, one will build capacity along some dimension. This kind of training is quantitative, you keep getting a little better at a particular activity. (Another activity that is like this is literal weightlifting.)
But I think that quantitative training is the exception rather than the rule.
I currently think that rationality (and most skills, for that matter) is better thought of as qualitative: made up of a bunch of very distinct, discrete micro-skills, or TAPs. Each micro-skill requires a specific training exercise, but once you’ve robustly ingrained a TAP, so that you reliably execute the relevant motion when confronted with the relevant stimulus, you’re done. You don’t get meaningfully stronger at applying each one, though you may have to repeat the exercise on a review schedule. But once you train TAPs like “notice that my thoughts feel fake → ask what I actually believe” or “feel frustrated → orient on what’s happening in this situation” [1] to reliability, it doesn’t really make sense to keep trying to get better at that TAP. You just move on to training the next one to reliability.
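As a toy illustration (none of this code or its numbers come from the comment above; the doubling rule and reset-on-lapse are assumptions borrowed loosely from spaced-repetition systems), the “train to reliability, then revisit on a review schedule” idea could be sketched as:

```python
def next_review(last_interval_days: int, succeeded: bool) -> int:
    """Toy spaced-review rule: double the interval after a reliable
    execution of the TAP; reset to 1 day after a lapse."""
    return last_interval_days * 2 if succeeded else 1

# Walk one hypothetical TAP ("thoughts feel fake -> ask what I believe")
# through a sequence of review outcomes, starting at a 1-day interval.
interval = 1
for outcome in [True, True, True, False, True]:
    interval = next_review(interval, outcome)

print(interval)  # -> 2 (the lapse near the end reset the schedule)
```

The point of the sketch is the shape of the curve, not the constants: reviews get exponentially sparser once the TAP is reliable, which is what lets you move your limited practice time on to the next micro-skill.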
Point 2:
That said, for this not to spin off into worthlessness, it needs to be grounded in some particular real-world “activity” that you do reliably enough to get feedback (in the same way that, if you’re trying to get better at football, you mostly do drills, deliberate practice on specific low-level skills, but you also want those skills to come together in the activity of “playing football games”). This is closer to a “practice” like meditation.
Some “practices” in that vein, that come to mind:
Doing research and theoretical work
Doing math proofs
Doing MIRI-like research
Effectively resolving disagreements
Doing actual science?
Debugging code?
Running a hedge fund
Writing and discussing Slate Star Codex articles
Point 3:
But all of these (except maybe the first one?) are too narrow to be “rationality practice.” Unless you care about any of these in particular, I think the main thing is just trying ambitious things in the world. Your low level skills should “come together” in making more money, or getting better grades, or finding the sort of significant other that you want, or successfully executing on some project, or something.
I think this tension is at the core of why we are not really a community of practice. Every community of practice that I know about, be that the Parkour community, or the Circlers, or your local football club, has some specific activity that one 1) could reasonably spend hours doing, 2) could enjoy doing for its own sake, and 3) can meaningfully get better at. We decided that our thing was “winning”, in general, so any particular activity will always be too narrow to capture what we care about.
This dynamic makes me sympathetic to this comment: I think if you try to have a community whose central activity is “winning”, you’re going to find that “winning” is not the sort of thing you can easily set up a practice regimen around. But if you make your community about figuring out confusing questions, that is in fact something you can do many reps of and get a lot better at.
[1] - Note that you have to train the true, sub-verbal, versions of these TAPs, not the words I’m using here.
Re: winning, I was recently thinking about how to explain my own goals, for which rationality is a key tool. One catchphrase I like is: source code access.
Here’s the idea: imagine that our whole world is a video game, and we’re all characters in it. This can mean the physical world, the economic world, the social world, all of the above, etc. My goal is to be able to read and modify the source code of the game.
That formulation makes the role of epistemic rationality quite central: we’re all agents embedded in this universe, and we already have access to the source code of economic/social/other systems; the problem is that we don’t understand the code well enough to know which changes will have which effects.
I really like this framing, and it resonates a lot with how I personally think about my orientation to the world.
One way to think about it is that there are at least 3 kinds of “things” that one might want as part of their rationality practice:
1. Specific tools, schemas, and frameworks for solving particular classes of problem. These are things like Goal Factoring or Double Crux. You will need to practice them (and maybe practice their individual sub-skills, in order to have facility with them), but the main point of these tools is that you deploy them to solve a particular kind of problem.
2. Discrete training. Many TAPs, with associated practice exercises.
3. Continuous training. Single practices that you can just continue to churn on, for years, and which will continue to pay dividends.