Just riffing a bit on the same project you started :)
There’s integrity and accountability—integrity (Level 3) as following a certain decision theory and making it common knowledge that you do, such that others can reliably simulate you, and coordinate and make trades with you; and accountability as choosing who you want to do your individual holistic regulation (Level 4).
On another note, prediction and calibration training is often pitched as a kind of Level 1/2 intervention, but I’m more bullish on it as a Level 2 intervention with important Level 5 consequences.
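To make the “track-record” part concrete, here’s a minimal sketch of the kind of scoring calibration training relies on, using the standard Brier score. This isn’t taken from any CFAR material, and the numbers are made up:

```python
# Minimal calibration-tracking sketch: score stated probabilities against
# outcomes using the Brier score (mean squared error between forecast and
# outcome; lower is better, and a constant 0.5 forecast scores 0.25).

def brier_score(track_record):
    """track_record: list of (forecast probability, outcome 0 or 1) pairs."""
    return sum((p - o) ** 2 for p, o in track_record) / len(track_record)

# Illustrative, made-up predictions and outcomes.
track_record = [(0.9, 1), (0.7, 1), (0.6, 0), (0.2, 0)]
print(brier_score(track_record))  # 0.125
```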
It’s certainly often helpful to quantify your beliefs, and to form an all-things-considered opinion as an ensemble model of all the things you might trust. But to restrict your trains-of-thought to always follow an all-things-considered view, never veering off into resonating with a single model or world-view, is, as you point out, not that great. However, spreading the meme of being able to zoom out to an all-things-considered, quantitative opinion when necessary, and engaging with that level regularly enough to build a track-record of being able to do that, seems like a core part of having a healthy Bayesian community, even if you actually use it quite infrequently compared to other modes of thinking (just like professional mathematicians riff on a post-rigorous level but can drop down to the rigorous level when need be). This is part of my current framing for the forecasting class I’m teaching at CFAR mainlines.
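As a toy illustration of what I mean by an all-things-considered opinion as an “ensemble model” (my own sketch, not a standard procedure; the weighting-by-trust-in-log-odds scheme and all the numbers are assumptions for the example):

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

def all_things_considered(estimates):
    """estimates: list of (probability, trust weight) pairs, one per model
    or world-view you partially trust; pooled by weighted log-odds average."""
    total = sum(w for _, w in estimates)
    return inv_logit(sum(w * logit(p) for p, w in estimates) / total)

# Made-up inputs: inside view says 0.8, base rate says 0.3, a friend says 0.5.
print(all_things_considered([(0.8, 0.5), (0.3, 0.3), (0.5, 0.2)]))  # ~0.61
```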
There’s also a long list of other CFAR techniques one could analyse.
Eliezer’s and Abram’s posts are interesting Level 1 interventions, but look a lot like improvements to your slow, deliberate, conscious thinking processes, perhaps eventually becoming ingrained in your S1. I’d compare that with TAPs, which seem to intervene quite directly at Level 2 (and probably with backchaining effects to Level 1): “what thoughts do I want to follow from other thoughts?” [1]
This also seems to me to be the core of what makes CBT work: you uncover unwanted trains of thought (“Get invite to social event” → “Visualise public shame from making an embarrassing comment” → “Flinch away from invite”), and then intervene to change their trajectory.
This raises the question of whether there are any more direct interventions at Level 1: interventions determining which thoughts, in and of themselves, are even desirable or not. I interpret Selective reporting and Lines of retreat as analysing such interventions. The former (a bit extrapolated) notes that if there are some unitary thoughts we cannot think, regardless of whether we actually believe them, this can cause large mistakes elsewhere in our belief system. The latter tackles the problem when the blocker is motivational rather than social, by embedding the thoughts in conditionals and building a backup plan before considering whether it has to be used.
Then there’s goal factoring, closely related to separation of concerns: don’t take actions which confusedly optimise for orthogonal goals; separate out your desires and optimise them separately. This probably has implications at Levels 1 through 4.
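For the software sense of “separation of concerns” that goal factoring borrows, here’s a purely illustrative sketch (the `db` object and its `insert` method are hypothetical, not any particular library):

```python
# Illustrative analogy only: one routine entangling two concerns versus the
# same concerns factored apart so each can be changed and optimised on its own.

def save_user_entangled(user, db):
    # Validation and persistence mixed together: improving either concern
    # means touching, and risking breaking, the other.
    if "@" not in user["email"]:
        raise ValueError("bad email")
    db.insert("users", user)

def validate(user):
    # Concern 1: is this a well-formed user?
    if "@" not in user["email"]:
        raise ValueError("bad email")

def persist(user, db):
    # Concern 2: store it.
    db.insert("users", user)
```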
I could go on through the CFAR techniques and might at a later point, but that will do for now.
[1] This looks more like “epistemic TAPs”, or “internal TAPs”, which haven’t yet become a standard part of the mainline curriculum, where TAPs are often more external, and for things like “Deciding to take the stairs instead of the elevator as soon as I come into the office and look at them”.