I think you’ve hit on one of the core problems here, and on some of my own dissatisfactions with how instrumental rationality is treated. (Quick plug: I’ve tried to do some theorycrafting on what I think the relevant issues are.)
Specifically for “Why doesn’t LessWrong cover this stuff?”, I think that CFAR goes over some of these things.
For me personally, I think a big part of solving the problems you describe (e.g. noticing that your brain is making up excuses to not exercise, rearchitecting an app, etc.) is that these things feel like “shoulds” to me. Ideally, I’d like my gut to feel that these things are worth doing in the same way that I “already know” they are. Techniques for making that happen are where some of the CFAR content, like Internal Double Crux, can be relevant.
For all the other stuff (e.g. noticing when you should leave university if you don’t have reasons to stay, learning things when you don’t have a solution to the forgetting curve, etc.), I’m pattern-matching this to instances where paying more attention to your situation and asking the right questions about what you’re doing / why you’re there is important. To that end, I think things like regularly scheduling time to be attentive and update your plans are useful. Evidence-backed interventions can be found in the psychological literature (see here for planning and here for habits).
I’m having some difficulty reconciling the above two types of issues with your thesis about not saying wrong things. I think you’re pointing to something similar to what I think about when your brain mixes up generating vs. recognizing, and you end up tricking yourself into believing something wrong (but tempting).
Would definitely love to see extensions of this idea, plus some more details on the things you listed in your second-to-last paragraph about what’s worked for you!