How do you know this is actually useful? Or is it too early to tell yet?
It is a bit early to tell, and seems hard to accurately measure, but I note some concrete examples at the end.
Concrete examples aside: for plan-making it’s probably more accurate to call it purposeful practice than deliberate practice. But it seems super clear to me that in ~every place where you can deliberately practice, deliberate practice is just way better than the default of “do the thing a lot and passively gain experience.” It would be pretty surprising to me if that mostly failed to be true of purposeful practice for plan-making or other metacognitive skills.
I agree it’s hard to accurately measure. All the more important to figure out some way to test whether it’s working, though. And there are some reasons to think it won’t. Deliberate practice works when your practice is as close to real-world situations as possible, but the workshop mostly covered simple, constrained exercises with clear feedback. It isn’t obvious to me that planning problems in Baba Is You are like useful planning problems IRL. So how do you know there’s transfer learning?
Some data I’d find convincing that Raemon is teaching you things which generalize: the tools you learned getting you unstuck on some existing big problems you’d been stuck on for a while.
The setup for the workshop is:
Day 1 deals with constrained Toy Exercises
Day 2 deals with thinking about the big, open-ended problems of your life (applying skills from Day 1)
Day 3 deals with thinking about your object-level day-to-day work (applying skills from Days 1 and 2)
The general goal with Feedbackloop-first Rationality is to fractally generate feedback loops that keep you in touch with reality in as many ways as possible (while paying a reasonable overhead price, factored into the total of “spend ~10% of your time on meta”)
Some details from The Cognitive Bootcamp Agreement:
I don’t have perfect feedback-loops to tell you if this workshop is working for you. So, there are seven different feedback-loop types, with different tradeoffs:
Predictions
Guess whether a given strategy will pay off in a concrete way, then see if you were right.
Toy Exercises
They only vaguely resemble your real problems, but you’ll know for sure whether you got the right answer in two hours.
Big picture planning
You’ll generate at least one new plan.
You won’t really know if it’s good, but: a) you’ll have intuitions about whether it’s more or less promising than your previous plan, which are at least some information;
b) you’ll make predictions about whether it’ll seem worth having thought about in a year;
c) throughout the planning process, you’ll look for minor opportunities to make a prediction about how you’ll feel about your planning process, and compare how it feels to previous plan-making you’ve done.
Object-level work, in 1-hour blocks
Spend a few timeblocks doing object-level work on your second-likeliest plan. Each hour, you’ll make conscious choices about how to spend your time and attention, and then reflect on whether that seemed useful. (In addition to cross-training your skills on the practical object level, this will help make your second-likeliest plan feel more real.)
Nearterm concrete predictions
Again, you can make concrete predictions about how an hour of object-level work will go, and whether pursuing a new strategy will seem to pay off in an intuitive sense.
Workshop retention
Make predictions about whether you’ll be using various skills from the workshop, 6 months from now.
Iterate on your strategies for retaining things (immediately) to see if you can improve your prediction about how much you’ll retain.
6 months from now, see if you’re still using workshop skills, or things clearly descended from them, and see if that feels useful
Post Workshop Predictions
Once you return to your day job, start making predictions about whether a given new strategy will pay off in a particular instance, and develop a sense of when they do and don’t actually help.
[New at the next workshop, not at the one George was at] “How many metastrategies are people generating, which go on to help other people at the workshop?”
Next time I’m trying a big whiteboard of metastrategies: every time someone generates a new strategy which they strongly believe helped them solve a problem, they write it on the whiteboard and put their initials next to it. If other people use that strategy and it helps them, they also put their initials on it. The person whose strategies go on to help the most people gets a prize.
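Several of these loops reduce to "state a probability, resolve it later, score yourself." As a minimal sketch (my own, not something from the workshop materials), here is how such a prediction log could be scored with the Brier score:

```python
# Hypothetical prediction log: record the probability you assigned to each
# concrete prediction, resolve it later, and score with the Brier score
# (lower is better; always guessing 50% scores 0.25).

def brier_score(predictions):
    """Mean squared error between stated probabilities and actual outcomes."""
    return sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)

# Each entry: (probability you assigned, 1 if it happened else 0)
log = [
    (0.9, 1),  # "this strategy will pay off this hour" -- it did
    (0.7, 0),  # an overconfident miss
    (0.1, 0),  # a long shot that correctly didn't happen
]

print(round(brier_score(log), 3))  # -> 0.17
```

Tracking this number over time, split by prediction type, is one cheap way to check whether a given feedback loop is actually calibrating you.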
My own experiences, after having experimented sporadically for 6 years and done dedicated Purposeful Practice for ~6 months:
First: I basically never feel stuck on impossible-looking problems. (This isn’t actually that much evidence because it’s very easy to be deluded about your approach being good, but I list it first because it’s the one you listed)
As of a couple weeks ago, a bunch of the skills feel like they have clicked together and finally demonstrated the promise of “more than the sum of their parts.”
Multiple times per day, I successfully ask myself “Is what I’m doing steering me towards the most important part of The Problem? And, ideally, setting myself up to carve the hypothesis space by 50% as fast as possible?” and it is pretty clear:
...that yes there is something else I could be doing that was more important
...that I wouldn’t have done it by default without the training
...that various skills from the workshop were pretty important components of how I then go about redirecting my attention to the most important parts of the problem.
The most important general skills that come up a lot are asking:
“What are my goals?” (generate at least 3 goals)
“What is hard about this, and how can I deal with that?”
“Can I come up with a second or third plan?”
“What are my cruxes for whether to work on this particular approach?”
“Do those cruxes carve hypothesis space by 50%? If not, can I shift my approach so they come closer to 50%, or so an experiment will take less time to resolve?”
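The "50%" framing has a standard information-theoretic reading: a yes/no crux teaches you the most when you are maximally unsure of its answer. A quick sketch (my own gloss, not workshop material) of the expected bits a crux buys you:

```python
import math

def expected_bits(p):
    """Expected information (in bits) from resolving a yes/no crux
    to which you currently assign probability p."""
    if p in (0.0, 1.0):
        return 0.0  # a foregone conclusion teaches you nothing
    # Binary entropy: maximized (1 bit) at p = 0.5
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

print(round(expected_bits(0.5), 3))  # -> 1.0 (a true 50/50 crux)
print(round(expected_bits(0.9), 3))  # -> 0.469 (a near-foregone conclusion)
```

This is why shifting an approach so its cruxes sit nearer 50% pays off: each resolved crux then shrinks the hypothesis space by close to half.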
Things that I don’t yet know for sure will pay off, but that I do successfully do most days now:
Asking “How could I have thought that faster?”
(subjectively, I feel like good strategies come to me fairly automatically without much effort, in pretty much the way Tuning your Cognitive Strategies predicted when I started it 6 years ago, although it is hard to verify that from outside)
Observing where most of the time went in a given Lightcone team activity, and asking “what would be necessary to cut this from hours/days, down to ~5 minutes of thought and an automated LLM query?”
Observing places where other Lightcone employees feel cognitively stuck, and often coming up with prompts for them that get them unstuck (they self-report that the prompts locally help them come unstuck; we’ll see over time whether that pays off in a major way)
(Notably, since getting into the groove of this, I’ve also gotten headaches from “overthinking”, and one of my current projects is to learn to more effectively process things in the background and come back to hard things when I’ve had more time to consolidate stuff. Also generally taking more rest in the middle of the day now that I have a clearer sense of my limits)
I am generally thinking of myself as having the goal of doubling Lightcone’s productivity in 12 months (via a combination of these techniques + LLM automation), in a way that should be pretty obvious to the outside world. I don’t actually know that I’ll succeed at that, but holding that as my intention feels very clarifying and useful. I would be interested in operationalizing bets about it.
(People at Lightcone vary in how bought into that goal they are. I am currently mostly thinking of it as a thing I’m personally aiming for, and getting people bought in by demonstrating immediate value is one of the subgoals.)
But, notably, six months ago I made a prediction: “6 months from now, in the past week, will I have suggested to a Lightcone employee that they make multiple plans and pick the best one?”, and I only gave it 10%, because most of my brilliant-seeming ideas don’t actually pan out. But when the prediction resolved last week, it turned out I had done so multiple times in the previous week.