[Question] Which rationality posts are begging for further practical development?
Which posts seem to you like they have really important practical implications for how we ought to think and act, but don’t seem to come with enough information about how to use the skills, or to develop them, or even what exactly the relevant skills are?
Context: I’d like to devote many of my hours in the upcoming few months to naturalist study of topics that are critical to fully understanding the most important existing resources in rationality. I mainly have my eye on posts from Eliezer’s Sequences, but I’m also open to other essays, books, and even videos. I’ll be taking extensive notes, and turning them into companion pieces to complement the original essays. (See “Lies, Damn Lies, and Fabricated Options” and “Investigating Fabrication” for an example of what I’m proposing.)
This seems high-impact. The impression I get from the current state of rationality is that it’s really decentralized, and it would be really easy and really helpful to amalgamate/distill a large portion of it into a more optimal combination of words.
(I’m interpreting this in a “turn things into exercises” lens which may not have been what you meant but happens to be what I’m On About this week)
I feel like I have recently hit the level where “actually deeply grok Bayes mathematically” is plausibly a good step for me.
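(In case it’s useful to anyone at a similar stage, here’s a minimal sketch of the kind of update I mean, with every number invented purely for illustration, a noisy diagnostic test.)

```python
# Minimal Bayes update: P(H | E) = P(E | H) * P(H) / P(E).
# The scenario and all numbers are made up for illustration.
p_h = 0.01              # prior: P(hypothesis)
p_e_given_h = 0.90      # likelihood: P(evidence | hypothesis)
p_e_given_not_h = 0.05  # false-positive rate: P(evidence | not hypothesis)

# Law of total probability gives the normalizer P(E).
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
posterior = p_e_given_h * p_h / p_e

print(f"P(H | E) = {posterior:.3f}")  # ~0.154: strong evidence, yet H stays unlikely
```

The thing to grok isn’t the arithmetic but why the low prior dominates; sitting with small examples like this until that stops being surprising is roughly what I mean by “deeply grok.”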
I second “Tuning Your Cognitive Strategies”. I just ran some workshops where I originally planned to teach it explicitly, and I ended up convinced that it was better to teach a somewhat more general meta-reflection exercise as the intro version of it.
Looking over the /bestoflesswrong page, some things that stick out:
Something in A Sketch of Good Communication maybe should be operationalized as an exercise.
Babble has already been turned into an exercise, but I think there’s room to make exercises that are optimized a bit more for… also teaching relevant other useful skills? It felt like the Babble Challenge series was sort of unnecessarily whimsical, in a way that was, like, cool if that whimsy was intrinsically motivating. (I think connecting it with your
I would like more introspection/focusing-ish practica that are… tailored more for research? (or, oriented in directions other than… self-help? [my shoulder-Logan doesn’t like me labeling the area “self-help” and it doesn’t feel quite right to me either but it’s what I can quickly come up with])
Inadequate Equilibria could probably be made more exercise-ish, although it’s also maybe similar to a lot of startup advice that already exists.
I think a lot of alignment research stuff should ideally turn into something exercise-y.
Teaching how to Notice Frame Differences, and also generally how to operationalize frames better in various directions. (See Shared Frames Are Capital Investments in Coordination for one aspect, and Meta-rationality and frames). Various Framing Practicums.
Paper-Reading for Gears
Gears-Level Models are Capital Investments (I guess various stuff relating to forming gearsy models)
Integrity and accountability are core parts of rationality
Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists
Yes Requires the Possibility of No
Rest Days vs Recovery Days
To listen well, get curious
The First Sample Gives the Most Information
I have a feeling some combination of Radical Probabilism, Infra-Bayesianism, and other “post-Bayes” epistemologies is begging for this kind of treatment.
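(To make that cluster a bit more concrete: Jeffrey conditioning is arguably the simplest “post-Bayes” update rule discussed under Radical Probabilism; it generalizes Bayesian conditioning to the case where an observation shifts your credence in the evidence without making it certain. A minimal sketch, with made-up numbers:)

```python
# Jeffrey conditioning: instead of learning E with certainty, an observation
# moves P(E) to a new value q, and other beliefs update as
#   P'(A) = P(A | E) * q + P(A | not-E) * (1 - q).
# Ordinary Bayesian conditioning is the special case q = 1.
# All numbers here are hypothetical, purely for illustration.
p_a_given_e = 0.8      # P(A | E)
p_a_given_not_e = 0.3  # P(A | not-E)
q = 0.6                # new credence in E after a blurry glimpse

p_a_new = p_a_given_e * q + p_a_given_not_e * (1 - q)
print(f"P'(A) = {p_a_new:.2f}")  # 0.60
```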
I’ve been teaching recently, and Babble was an obvious prerequisite to things like goal factoring and Hamming problems. It’s pretty easy to teach, but I think even pretty marginal improvements would pay big dividends because it’s upstream of so many things.
I wrote up “How To Think Of Things” for CFAR a while back. I probably wanna at least edit it some before making it a top level post, but I’m curious what you think of it.
I generally think that, for all math-mind integration stuff, it’s helpful for people to first look at Michael Smith’s recent twitter rant about how math education conditioned virtually all humans to hate math.
Not just because undoing those long years of conditioning is a critical step toward taking math and really feeling like working with it, but also because most of the people writing about math (e.g. textbooks, tutorials, etc.) either were conditioned to hate math to some degree or, even if they thoroughly weren’t, their own reading material will still have been largely written by people who were, because literally everyone in society went through the same long years of math in the education system.
This sentence cuts off.
yes—a perfect in-situ example of babble’s sibling, prune
On the unofficial 2022 LessWrong Census, I asked what people thought the most important lesson of rationality was. (The actual text of the question was “If you pick one lesson of rationality that everyone in the world would magically and suddenly understand, what lesson do you pick?”) The top answers were Conservation of Expected Evidence, Making Beliefs Pay Rent, and Belief In Belief. Conservation of Expected Evidence I personally recall as mindblowing when I first read it and reflected on it.
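(For anyone who hasn’t seen it spelled out: the mindblowing part of Conservation of Expected Evidence is an identity, not a slogan. Your current credence must equal the probability-weighted average of the credences you expect to hold after looking. A minimal numerical check, with arbitrary numbers:)

```python
# Conservation of Expected Evidence:
#   P(H) = P(E) * P(H | E) + P(not-E) * P(H | not-E).
# The numbers below are arbitrary; the identity holds for any choice of them.
p_h = 0.30
p_e_given_h = 0.70
p_e_given_not_h = 0.20

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e
p_h_given_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)

expected_posterior = p_e * p_h_given_e + (1 - p_e) * p_h_given_not_e
print(f"{expected_posterior:.3f}")  # 0.300, exactly the prior: you can't expect to update
```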
The Litany of Gendlin and the Litany of Tarski combine in my own head to be a really useful countercharm against ugh fields around finding out information about unpleasant things, though I’m not entirely sure that’s the direction The Meditation on Curiosity was supposed to take me. That, plus the offhand line that if you know what you’ll think later you ought to think it now, has sped up a lot of decisions that I otherwise would have agonized over in a manner that was, in hindsight, mostly wasted motion. Those aren’t really a single post though.
For a single post with obvious implications on how to act which is not sufficient on its own to start acting like that, Hero Licensing might be my favourite pick. I don’t know how to spark the skills involved in just trying things, other than being a Mysterious Old Wizard at them and hoping something catches, but it seems important. The Importance of Saying Oops seems a lot more tractable as something to turn into an exercise though.
I know nothing about naturalism, but cognitive tuning (beyond just cognitive strategies) seems like it’s begging to be expanded upon.
It definitely looks like something that, just on its own, could eventually morph into a written instruction manual for the human brain (i.e. one sufficiently advanced to enable people to save the world).
I beg forgiveness for linking to one of my own posts; it’s what I know best:
My version of Simulacra Levels lays out some distinctions which I think are important. I would love to see a set of practical exercises or reminder-rituals someone could do, that trains one to understand the distinctions and apply them effortlessly in real life, so that knowing what simulacra level you are speaking on comes as easily as knowing whether you are telling the truth or lying, or knowing whether you are saying something liberal or conservative. And then for advanced students, exercises that help you know what level you are thinking on.
Epistemic Cooperation/Legibility
Something to increase the hit rate of luck based medicine
Reality has a surprising amount of detail
The parts in That Alien Message and the Beisutsukai shorts that are about independently regenerating known science from scratch. Or zetetic explanations, whichever feels more representative of the idea cluster.
In particular, how does one go about making observations that are useful for building models? How does one select the initial axioms in the first place?
I’m thirding Tuning Your Cognitive Strategies.
I also strongly recommend TsviBT’s please don’t throw your mind away, since prioritizing the state of being “on a roll” and genuinely having a good time looks like a really tractable way to boost brainpower to a large degree, even if it only works on a somewhat predetermined range of topics for each person.
How to deal with crucial considerations and deliberation ladders (link goes to a transcript + audio).
Regarding your example, Vaclav Smil has a lot of interesting books about the energy economy and natural resources as a whole.
Exposing yourself to his ideas might surface some correlates or suggest a heuristic you hadn’t thought of in that context. “The energy economy of rationalist thought processes,” or something like that.
I think this is a great goal, and I’m looking forward to what you put together!
This may be a bit different from the sort of thing you’re asking about, but I’d love to see more development/thought around topics related to https://www.lesswrong.com/posts/XqmjdBKa4ZaXJtNmf/raising-the-sanity-waterline.
Rationality is certainly a skill, and better / more concise exposition on rationality itself can help people develop it. But once you learn to think right, what are some of the most salient object-level ideas that come next? How do we better realize values in the real world, and make use of / propagate these better ways of thinking? Why is this so hard, and what are strategies to make it easier?
SSC/ACX is a great example of exploring object-level ideas well, and I’d love to see more of that type of work pulled back into the community.