Reflections on the Metastrategies Workshop
I’m a research lead at Timaeus and attended a workshop that @Raemon ran from Oct 4-6 (it was shortened from 4 days to 2.5 days to fit into a weekend). I had prior interest + experience in deliberate practice and enjoyed lots of Ray’s posts about it, so I was curious about the workshop on top of being in a position to actually make impactful plans.
This is a lightly edited write up that I initially made for myself and for the team at Timaeus about my experience and takeaways. It’s not super polished, but seems better to not clean up and publish than to not clean up and not publish.
What was the workshop about?
Ray is interested in metacognitive improvements. A rough definition of these are: skills or strategies you can learn that make you better able to understand your cognitive process and to influence it in ways that make you more effective as a person. Some examples might include:
Plan / decision making
Noticing
Prediction / calibration training
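As a concrete illustration of what calibration training can look like (my own sketch, not from the workshop): track your probabilistic predictions and score them against outcomes, e.g. with a Brier score. The `brier_score` helper and the sample predictions below are hypothetical.

```python
def brier_score(forecasts: list[tuple[float, bool]]) -> float:
    """Mean squared error between stated probabilities and outcomes.
    Lower is better; always guessing 50% scores exactly 0.25."""
    return sum((p - float(outcome)) ** 2 for p, outcome in forecasts) / len(forecasts)

# Hypothetical week of predictions: (stated probability, what actually happened)
week = [(0.9, True), (0.7, True), (0.6, False), (0.8, True), (0.3, False)]
print(f"Brier score: {brier_score(week):.3f}")
```

Tracking this number over weeks is the feedback loop: if your score isn't beating the constant-50% baseline, your probabilities aren't carrying information yet.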
This workshop was specifically about being better at making plans. The terminal goal was to help people who work in deeply confusing, long-feedback-loop areas (particularly x-risk reduction). The target audience was people who are in a position to be making meaningful plans in their job (i.e., they determine most of their own work and/or the work of others). The workshop was probably most helpful for people who are already somewhat familiar with how to make good plans.
Why plans?
It’s tractable: you can plausibly learn at least one concrete thing in a weekend that will make you better at making plans.
It’s understandable: you can just follow the procedure of a concrete thing without needing to have inscrutable mental motions described to you.
It’s impactful: the absolute best plan you can follow (which might not be the best plan you can make) can often be 10x better than your default plan, even if your current plan is good.
I think this sounds a little crazy until you think about it? But someone could be executing a plan that’s not even net positive. Someone could be working on entirely the wrong thing, and there could be a totally different research agenda or career that has 10x or 100x the impact. Someone could just be chasing down the wrong medium-time-horizon targets for their grand strategy, slowing the whole thing down by massive OOM factors.
I personally have tons of examples where I retroactively saw the path to 10x better returns if I hadn’t missed something that I could have figured out if I’d thought way harder or more clearly to begin with.
These results compound: a plan might result in 10x returns because it unlocks resources for even better plans.
It’s unfortunately super hard to know ahead of time which plan is 10x better, but the long-tailed nature of plans means that if you become only a few % better at plan making, you might actually get really outsized returns from it over a long time horizon.
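A toy way to see the "few % better at plan making, outsized returns" claim (my own sketch, not from the workshop): model plan values as long-tailed (here lognormal, an illustrative assumption) and compare executing your default plan against picking the best of a few brainstormed candidates.

```python
import random

def simulate(num_candidates: int, trials: int = 20000, seed: int = 0) -> float:
    """Average value achieved when you generate `num_candidates` plans and
    execute the best one. Plan values drawn from a long-tailed lognormal."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += max(rng.lognormvariate(0, 1.5) for _ in range(num_candidates))
    return total / trials

baseline = simulate(1)  # just run your default plan
better = simulate(3)    # brainstorm a couple of alternatives first
print(f"default plan: {baseline:.2f}, best of 3 plans: {better:.2f}")
```

Because the distribution is heavy-tailed, the gain from a couple of extra candidates comes mostly from occasionally catching a tail-value plan you'd otherwise never have considered.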
This basically turned out to be a bullet point outline of my theory of value for this type of work. FWIW I’m extremely deliberate-practice-pilled, but I think I would judge the value of this workshop to be O($1-10k), and my personal speculation is that it’s totally possible for this to have value on O($100k+) if one tried really hard to adopt the tools / mentality from the workshop (and had the leverage to use them, though I guess the sky’s the limit if you have enough leverage).
I think due to time constraints, this workshop chose to take the shadowmoth approach of “throw people in the deep end and make them learn to swim or not”. I don’t think this was a crazy choice, but I think there are more gentle ways to introduce ideas before shadowmothing someone?
I’ll speculate on this later. (I did not end up speculating on it.)
Examples from the workshop
(Note: I would recommend not trying to solve or play around with these if you think there is any chance you’d like to try out some of the workshop exercises in a formal setting)
Here are some examples of exercises that we did (may not be in order):
[Redacted]
This was my favorite exercise, and the second time that I’d done it. I learned a few useful things from this and from listening to other people approach the problem, which I’ll elaborate on later.
You are given a Baba Is You puzzle to solve, but you are not allowed to interact with the puzzle in any way. This is what makes it particularly hard, because Baba Is You is a notoriously unintuitive and difficult puzzle game, and usually the process of solving puzzles involves fiddling with bits and collecting little pieces of evidence and finding out how rules interact through empirics. In this exercise, you don’t get any of that, you just get to Think Very Hard and try to make a plan that solves the puzzle on the first try (“one-shotting” it).
A motivating theme here is “pretend like empirics / experiments are really hard – every plan you make takes weeks or months of engineering time, but time spent planning up front is cheaper”
Unfortunately, this is kind of a hard mindset to embody, and it seems like a common takeaway is “you can’t think about anything successfully ever” and “empirics are incredibly OP” (I think that cheap empirics are incredibly OP, but that’s different)
Another motivating theme is “what does it feel like to feel surprised and what does it feel like to have gained an insight?”
This is also a good exercise, I think (and one that I’ve done on my own before). The idea is that you get a puzzle from Thinking Physics and you just sit and try to solve it.
A motivating theme here is “what does it feel like to have a fuzzy idea of why something is true vs have a very crisp idea of why something is true, in the sense that the world must be deeply fucked somehow if the crisp idea is true, but the world might be totally fine if the fuzzy thing is not true?”
Another motivating theme is that of “noticing insights”, similar to with Baba Is You.
Some time was spent on freewriting exercises about plans that you actually have IRL, to try and immediately apply lessons from the workshop to IRL problems. This was a significant portion of the time of the workshop, but I won’t get into those much.
What lessons were useful to take away?
General skills that are OP (not all from the workshop; I’ve mixed in some of my own takes)
(These are also skills that would make the workshop much more useful for you; having them doesn’t mean the workshop won’t help, it means you can use it to build on them)
Being well calibrated
Noticing (I feel like metacognition is ~impossible if you have zero noticing (most people don’t have literally zero noticing even if they’ve never tried to train it))
Being unafraid of looking stupid / not having an ego attached to your performance
This is a problem I have! I notice that I get emotionally stressed / anxious about not being competitive in solving the exercises compared to other participants. Noticing / acknowledging it seems to help a little.
Patience / taking breaks
Taking naps / caring for physiological health
Embodying skills as a lifestyle.
Being able to get into the mindset where it “counts” even in trivial situations. If you can’t live the values when it doesn’t matter, you won’t do it when it does matter
Specific skills from the workshop that are OP
Live logging (similar to having a research log): basically writing down all of the thoughts you’re having while thinking.
Related to white boxing (which is trying to un-black-box your thought process as much as possible). Live logging is trying to white box yourself, to yourself.
This is a clever ~shortcut to Noticing, for the purposes of this workshop (so the workshop doesn’t have Noticing as a prereq)
Metastrategic brainstorming: taking a step back and trying to think of radically different approaches to solving the problem
Ray thinks this is like, 40-60% of the skill of making better plans (generating them in the first place)
Basically, you notice that you’re stuck, and you see if there’s a different type of thing you can try or some intervention that you can make that will shortcut your process. Some examples of different categories of strategies you might brainstorm:
Think about the generating process of the problem
In places where the problem has contacted reality elsewhere, what does that imply about it?
Brute force + heuristics
Is the space of options small enough that you can just do this?
Babble
Maximal explore, minimal exploit, or something like that
Instead of building a wall, build a brick
What’s the smallest thing you could do that would make progress or tell you something new?
Combinatorially examine your assumptions + action / option space
Sometimes you will miss something that you were overlooking before (e.g. because your brain automatically pruned it as an option)
Identify OODA loops or make some if you can’t find any
Red team / construct an impossibility proof (of a strategy)
Common approach in math, but if you try to prove that your idea is impossible, you might notice something interesting about it
A related personal takeaway that I noticed here is that this can be a shortcut way to mitigate rabbit-holing on a solution (you pretend like the solution is impossible even if you don’t think it is – if you lived in that world, what would your next guess be?). I found that at one point I thought that a solution was the only clear solution, then the game told me I was wrong, then I immediately found the right solution once I had let go of the previous one. So maybe if you could just really make yourself believe that your favorite solution is wrong, on the level of receiving hard evidence without having actually received the evidence?
Take a nap / break / walk / eat something
One participant was skeptical of this, got stuck on something, tried it, and then almost immediately became unstuck.
Shortening feedback loops / “How could I have thought that faster?”
This is tied to live logging, since having a log of your thoughts helps a lot. The idea is to go back through your actual process for solving a problem, look for the earliest points where you could have made an insight (instead of the point where you actually made it), and ask what you could have thought in order to reach that insight sooner.
This is like, reinforcement training / conditioning for your brain
Having 3 plans, 2 frames, and a crux
I’m not sure I’m super sold on this, but the idea is something like “can you try to generate multiple plans and multiple perspectives, and ideally a crux or two about what would make you choose one perspective or plan over another” to avoid rabbit-holing (avoiding rabbit-holing is a big underlying theme, I’m noticing)
Some other misc things that are small or known elsewhere or weren’t a big focus of the workshop
Quantified intuitions about “good” – can you figure out how to measure two plans in the same “units”? E.g. at one point Ray chose between “should I help people in the workshop during the next (open working) session or should I spend some time working on iterations and reflections for improving the workshop?”
An example of a shared unit here is “in expectation, the amount of (weighted?) improved researcher-hours on x-risk” or something like that
Figuring out your cruxes and ideally internally double-cruxing yourself (between two plans)
Framing things as ~OODA loops
I think this is a useful tool if you haven’t run into it before
We did an exercise where we tried to frame things that we do IRL as OODA loops and identify places where we aren’t using them but could be
What did I personally take away?
This is a bit more stream of consciousness / thinking out loud.
One phrase kept getting offhandedly repeated throughout the workshop when Ray was giving examples, and it’s stuck in my head. It was something like, “I would just be in the middle of doing something and I would wake up and become sentient and look around me and be like, what am I doing?”
Something about the “wake up and become sentient” thing feels like a really major core of the workshop to me. I’ve spent a bunch of time (diffuse over many years) working on or thinking about deliberate practice and decision making and stuff, but I noticed that a lot of my patterns and intuitions here have kind of become a bit too subconscious, and I’ve forgotten that they’re a thing that I can just look at and continue to polish (I’ve somehow gotten too busy to remember that deliberate practice is a thing that I should still be doing everywhere).
For example, in the Baba Is You exercise, I ended up doing 3 different levels (across two sessions), and it wasn’t until I’d thought about it a bunch afterwards that I realized I had the same type of blind spot in all three cases (my brain is a bit too eager to prune things that are above some threshold of “I’m sure the world works this way”). I think this is a pretty generalizable pattern to notice, and it required “waking up” + live logging + reflecting about the process to notice it in the game setting.
Another big thing in the workshop for me was how important it is to
Actually do these as exercises and not just read about them
Actually do these as exercises and not just remember that I’d learned these lessons once upon a time (thinking of like, muscles that are not used or stretched in a long time become stiff or atrophy)
Actually think about calibration and practice noticing more often, and how much these things have helped me in the past and that I can just keep improving these things until I see diminishing returns
This thing wasn’t really part of the workshop (the workshop was not focused on execution of plans), but it is something the workshop reminded me of: I can in fact just spend time thinking about how to deliberately practice execution as a research lead. This is something that just seems obvious to do.
TLDR the thing that maybe had the biggest impact was the workshop as a catalyst for a meta-level “waking up and becoming sentient” about the fact that I can “wake up and become sentient” about object level things. (not that I have just literally been autopiloting, but there are degrees to getting out of your head about things / switching to manual control). Besides that, two object level patterns:
I still get emotionally agitated sometimes when I feel like I’m stupid because I can’t solve something
I prune a little too much and maybe explore a bit too little. I probably trust my subconscious process a little too much and should more often have some doubts about the end result (but also don’t want to overcorrect on this)
And some skills that I’m interested in practicing more and / or trying to use everywhere to internalize them:
Noticing
Metastrategic Brainstorming
Live logging (way more than I already do)
The latter two are the kind of thing where my theory of internalizing them is based on how I got myself to use ChatGPT originally, which was: shoehorn it into everything for a few weeks, then gradually prune use cases to what actually feels useful.
I also now have the idea of a “Deliberate Practice Monastery” tattooed on my brain and hear its beautiful siren call. I think there’s a real argument to be made that this work is extremely important and in its idealized, full-potential form, can make our best x-risk researchers significantly better.
I also like a rule of thumb that Ray mentioned during the workshop, which was “spend ~10% of your time on meta”.
Other misc thoughts
I found myself surprised at how not deliberate-practice-pilled a lot of people are. This type of practice and mindset feel like a natural companion to rationality, but it seems like very few people have spent significant time or attention on tuning these things.
It feels like the fruit is hanging so low here, like I genuinely believe there are just Ideas that are not impossible tacit knowledge that can be turned into Words that you can just Say to people and if they Listen they can just immediately be more competent. Perhaps this workshop didn’t yet have the perfect Words, but I think such incantations are definitely within reach.
It feels kind of insane to me that Ray is fighting to prove that this is useful enough to spend more effort on. I feel like this is just self evidently important or something, I don’t know if I’m missing something or everyone else is?
It is hard to come up with good exercises for this type of workshop, and I’m impressed at what Ray has done so far
I think case studies would be a really useful addition. My suggestion here was to include a few demonstrations (by Ray) of the proposed tools, but my wishlist would include examples in science or history where a plan making process was documented / legible and resulted in obviously better plans.
I still mostly think “plans are worthless, but planning is everything,” or whatever the quote is.
Cheap empirics are OP as hell, you take things for granted until they’re missing.
All else being equal, velocity is OP. If you can do things 10% faster, you learn 10% faster too, everything compounds and is more than 10% better.
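A back-of-the-envelope sketch of the compounding claim (the numbers here are illustrative assumptions, not from the post): if each completed feedback loop compounds your capability a little, then being 10% faster per loop yields more than a 10% improvement over a year.

```python
def capability_after(days: float, days_per_loop: float, gain_per_loop: float = 0.02) -> float:
    """Toy model: each completed feedback loop compounds capability by
    `gain_per_loop`; faster loops mean more loops in the same wall-clock time."""
    loops = days / days_per_loop
    return (1 + gain_per_loop) ** loops

base = capability_after(365, days_per_loop=5.0)
fast = capability_after(365, days_per_loop=5.0 / 1.1)  # loops run 10% faster
print(f"baseline growth: {base:.2f}x, 10% faster loops: {fast:.2f}x")
```

Under these made-up parameters the year-end gap between the two is noticeably larger than 10%, which is the "everything compounds" intuition in miniature.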
Concrete value in my day-to-day research
I’m redacting most of this section b/c it’s high context and also maybe private, I’m not sure and don’t want to think too hard about it. The thrust of this section was something like:
Looking back, if I had really, with my whole ass, internalized and practiced some of these lessons at the start of this year (maybe more than what the workshop alone would give? but definitely something achievable with the workshop as a catalyst), I could easily imagine saving a few weeks here or there, adding up to potentially months of speed up in my productivity (in the original document, there were four concrete examples).
In practice, I would guess that over the past few weeks, trying hard to apply things like metastrategic brainstorming and other forms of deliberate / purposeful practice have already saved time measured in days (in expectation, over the next few months).
Go to the next one!
Ray is hosting another workshop this weekend (Oct 25-27). Go do it. Actually doing the exercises is miles better than just reading about them (and going home afterwards and continuing to practice is miles better than that).
Some clarification of fine points:
Here’s a slightly more nuanced take:
I think, when I get/invent the “metacognition textbook from the future”, there will turn out to be ~5 metastrategies that are simple English prompts that work for most people, that accomplish like 50-85% of the value of “learn the whole-ass skill of metastrategic brainstorming”, for most people who are solving problems that are only moderately hard.
I predict[1] that if you are solving Really Quite Difficult problems (such as navigating a singularity that might be coming in 3-10 years, or probably founding most companies and doing most open-ended research), the full skill of “successfully generate a new metastrategy you haven’t thought of before, on the fly, in less-than-a-minute” will be pretty important, and probably 40-60% of the value. Although a large reason for that is that what you’ll actually need is [some other particular specific thing], and metastrategic brainstorming is the swiss-army-knife tool that is likely to help you find that thing.
This is me making some kind of bet, I don’t have empirics to back this up. If you disagree I’d be happy to operationalize a more specific bet.
How do you know this is actually useful? Or is it too early to tell yet?
It is a bit early to tell and seems hard to accurately measure, but I note some concrete examples at the end.
Concrete examples aside: in plan making it’s probably more accurate to call it purposeful practice than deliberate practice. But it seems super clear to me that in ~every place where you can deliberately practice, deliberate practice is just way better than the default of “do the thing a lot and passively gain experience.” It would be pretty surprising to me if that mostly failed to be true of purposeful practice for plan making or other metacognitive skills.
I agree it’s hard to accurately measure. All the more important to figure out some way to test whether it’s working, though. And there are some reasons to think it won’t: deliberate practice works when your practice is as close to real-world situations as possible, and the workshop mostly covered simple, constrained, clear-feedback events. It isn’t obvious to me that planning problems in Baba Is You are like useful planning problems IRL. So how do you know there’s transfer learning?
Some data I’d find convincing that Raemon is teaching things which generalize: if the tools you learned got you unstuck on some existing big problems that you’ve been stuck on for a while.
The setup for the workshop is:
Day 1 deals with constrained Toy Exercises
Day 2 deals with thinking about the big, open-ended problems of your life (applying skills from Day 1)
Day 3 deals with thinking about your object-level day-to-day work. (applying skills from Day 1 and 2)
The general goal with Feedbackloop-first Rationality is to fractally generate feedback loops that keep you in touch with reality in as many ways as possible (while paying a reasonable overhead price, factored into the total of “spend ~10% of your time on meta”)
Some details from The Cognitive Bootcamp Agreement
My own experiences, after having experimented in a sporadic fashion for 6 years and done dedicated Purposeful Practice for ~6 months:
First: I basically never feel stuck on impossible-looking problems. (This isn’t actually that much evidence because it’s very easy to be deluded about your approach being good, but I list it first because it’s the one you listed)
As of a couple weeks ago, a bunch of the skills feel like they have clicked together and finally demonstrated the promise of “more than the sum of their parts.”
Multiple times per day, I successfully ask myself “Is what I’m doing steering me towards the most important part of The Problem? And, ideally, setting myself up to carve the hypothesis space by 50% as fast as possible?” and it is pretty clear:
...that yes there is something else I could be doing that was more important
...that I wouldn’t have done it by default without the training
...that various skills from the workshop were pretty important components of how I then go about redirecting my attention to the most important parts of the problem.
The most important general skills that come up a lot are asking:
“What are my goals?” (generate at least 3 goals)
“What is hard about this, and how can I deal with that?”
“Can I come up with a second or third plan?”
“What are my cruxes for whether to work on this particular approach?”
“Do those cruxes carve hypothesis space 50%? If not, can I shift my approach so that they come closer to 50%, or so that an experiment takes less time to resolve?”
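The “carve hypothesis space 50%” heuristic has a clean information-theoretic reading, which I’ll sketch here (my framing, not necessarily Ray’s): a yes/no test yields the most expected information, in bits, when you think it’s 50/50 to resolve either way.

```python
import math

def expected_bits(p: float) -> float:
    """Expected information (binary entropy, in bits) from a yes/no test
    that you believe comes out 'yes' with probability p."""
    if p in (0.0, 1.0):
        return 0.0  # a foregone conclusion teaches you nothing
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

for p in (0.05, 0.25, 0.5):
    print(f"test you expect to resolve 'yes' {p:.0%} of the time: "
          f"{expected_bits(p):.2f} bits")
```

A crux you’re 95% sure about buys you well under a third of a bit in expectation, which is why skewed experiments feel like they “confirm” rather than teach.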
Things that I don’t yet know for sure if they’ll pay off but I do successfully do most days now:
Asking “How could I have thought that faster?”
(subjectively, I feel like good strategies come to me fairly automatically without much effort, in pretty much the way Tuning your Cognitive Strategies predicted when I started it 6 years ago, although it is hard to verify that from outside)
Observing where most of the time went in a given Lightcone team activity, and asking “what would be necessary to cut this from hours/days, down to ~5 minutes of thought and an automated LLM query?”
Observing places where other Lightcone employees feel cognitively stuck, and often coming with prompts for them that get them unstuck (they self-report as the prompts locally helping them come unstuck, we’ll see over time whether that seems to pay off in a major way)
(Notably, since getting into the groove of this, I’ve also gotten headaches from “overthinking”, and one of my current projects is to learn to more effectively process things in the background and come back to hard things when I’ve had more time to consolidate stuff. Also generally taking more rest in the middle of the day now that I have a clearer sense of my limits)
I am generally thinking of myself as having the goal of doubling Lightcone’s productivity in 12 months (via a combination of these techniques + LLM automation), in a way that should be pretty obvious to the outside world. I don’t actually know that I’ll succeed at that, but holding that as my intention feels very clarifying and useful. I would be interested in operationalizing bets about it.
(People at Lightcone vary in how bought into that goal. I am currently mostly thinking of it as a thing I’m personally aiming for, and getting people bought into it by demonstrating immediate value is one of the subgoals.)
But, notably, six months ago I made a prediction: “6 months from now, in the past week, will I have suggested to a Lightcone employee that they make multiple plans and pick the best one?”, and I only gave it 10% because most of my brilliant-seeming ideas don’t actually pan out. But, when the prediction resolved last week, it resolved multiple times in the previous week)