Plans Are Predictions, Not Optimization Targets
Imagine a (United States) high-schooler who wants to be a doctor. Their obvious high-level plan to achieve that goal is:
1. Graduate high school and get into college
2. Go to college, study some bio/chem/physiology, graduate and get into med school
3. Go to med school, make it through residency
4. Doctor!
Key thing to notice about that plan: the plan is mainly an optimization target. When in high school, our doctor-to-be optimizes for graduating and getting into college. In college, they optimize for graduating and getting into med school. Etc. Throughout, our doctor-to-be optimizes to make the plan happen. Our doctor-to-be does not treat the plan primarily as a prediction about the world; they treat it as a way to make the world be.
And that probably works great for people who definitely just want to be doctors.
Now imagine someone in 1940 who wants to build a solid-state electronic amplifier.
Building active solid-state electronic components in the early 1940s is not like becoming a doctor. Nobody has done it before, nobody knows how to do it, nobody knows the minimal series of steps one must go through in order to solve it. At that time, solid-state electronics was a problem we did not understand; the field was preparadigmatic. There were some theories, but they didn’t work. The first concrete plans people attempted failed; implicit assumptions were wrong, but it wasn’t immediately obvious which ones. One of the most confident predictions one might reasonably have made about solid-state electronics in 1940 was that there would be surprises; unknown unknowns were certainly lurking.
So, how should someone in 1940 who wants to build a solid-state amplifier go about planning?
I claim the right move is to target robust bottlenecks: look for subproblems which are bottlenecks to many different approaches/plans/paths, then tackle those subproblems. For instance, if I wanted to build a solid-state amplifier in 1940, I’d make sure I could build prototypes quickly (including with weird materials), and look for ways to visualize the fields, charge densities, and conductivity patterns produced. Whenever I saw “weird” results, I’d first figure out exactly which variables I needed to control to reproduce them, and of course measure everything I could (using those tools for visualizing fields, densities, etc.). I’d also look for patterns among results, and look for models which unified lots of them.
Those are strategies which would be robustly useful for building solid-state amplifiers in many worlds, and likely directly address bottlenecks to progress in many worlds. In our particular world, they might have highlighted the importance of high-purity silicon and dopants, or of surfaces between materials with different electrical properties, both of which were key rate-limiting insights along the path to active solid-state electronics.
Now, when looking for those robust bottlenecks, I’d probably need to come up with a plan. Multiple plans, in fact. The point of “robust bottlenecks” is that they’re bottlenecks to many plans, after all. But those plans would not be optimization targets. I don’t treat the plans as ways to make the world be. Rather, the plans are predictions about how things might go. My “mainline plan”, if I have one, is not the thing I’m optimizing to make happen; rather, it’s my modal expectation for how things will go (conditional on my efforts).
My optimization targets are, instead, the robust bottlenecks.
When reality throws a brick through the plans, I want my optimization target to have still been a good target in hindsight. Thus robust bottlenecks: something which is still a bottleneck under lots of different assumptions is more likely to be a bottleneck even under assumptions we haven’t yet realized we should use. The more robust the bottleneck is, the more likely it will be robust to whatever surprise reality actually throws at us.
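To make the “robust bottlenecks” heuristic concrete, here is a minimal sketch in Python. The plans, subproblems, and the simple counting rule are all my own made-up illustration (hypothetical names, not anything from the actual 1940s history): write down several candidate plans, list the subproblems each one has to get through, and rank subproblems by how many plans they bottleneck.

```python
from collections import Counter

# Toy illustration with made-up plans and subproblems (hypothetical names,
# not taken from the post). Each candidate plan is the set of subproblems
# it has to get through.
candidate_plans = {
    "point-contact amplifier": {"high-purity materials", "surface physics", "fast prototyping"},
    "field-effect amplifier": {"high-purity materials", "field visualization", "fast prototyping"},
    "junction amplifier": {"high-purity materials", "surface physics", "dopant control"},
}

# Count how many plans each subproblem bottlenecks.
bottleneck_counts = Counter(
    subproblem
    for subproblems in candidate_plans.values()
    for subproblem in subproblems
)

# The most robust bottlenecks are the subproblems shared by the most plans;
# those, not any single plan, become the optimization targets.
for subproblem, count in bottleneck_counts.most_common():
    print(f"{subproblem}: bottleneck for {count} of {len(candidate_plans)} plans")
```

Of course, the counting is the trivial part; the real work is generating enough genuinely different plans and figuring out what actually bottlenecks each of them.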
In My Own Work
Late last year, I wrote The Plan—a post on my then-current plans for alignment research and the reasoning behind them. But I generally didn’t treat that plan as an optimization target; I treated it as a modal path. I chose my research priorities mostly by looking for robust bottlenecks. That’s why I poured so much effort into understanding abstraction: it’s a very robust bottleneck. (One unusually-externally-legible piece of evidence for robustness: if the bottleneck is robust, more people should converge on it over time as they progress on different agendas. And indeed, over the past ~year we saw both Paul Christiano and Scott Garrabrant converge on the general cluster of abstraction/ontology identification/etc.)
Since then, my views have updated in some ways. For instance, after reading some of Eliezer and Nate’s recent writing, I now think it’s probably a better idea to use corrigibility as an AI alignment target (assuming corrigibility turns out to be a coherent thing at all), as opposed to directly targeting human values. But The Plan was to target human values! Do I now need to ditch my whole research agenda?
No, because The Plan wasn’t my optimization target, it was my modal path. I was optimizing for robust bottlenecks. So now I have a different modal path, but it still converges on roughly-the-same robust bottlenecks. There have been some minor adjustments here and there—e.g. I care marginally less about convergent type signatures of values, and marginally more about convergent architectures/algorithms to optimize those values (though those two problems remain pretty tightly coupled). But the big picture research agenda still looks basically similar; the strategy still looks right in hindsight, even after a surprising-to-me update.
Back To The Doctor
At the start of this post, I said that the high-schooler who definitely wanted to be a doctor would probably do just fine treating the plan as an optimization target. The path to becoming a doctor is very standard, the steps are known. It’s not like building a solid-state amplifier in 1940, or solving the AI alignment problem.
On the other hand… man, that high-schooler is gonna be shit outta luck if they decide that medicine isn’t for them. Or if the classes or residency are too tough. Or if they fail the MCAT. Or …. Point is, reality has a way of throwing bricks through our plans, even when we’re not operating in a preparadigmatic field.
And if the high-schooler targets robust bottlenecks rather than optimizing for one particular plan… well, they probably still end up in fine shape for becoming a doctor! (At least in worlds where the original plan was workable anyway—i.e. worlds where courses and the MCAT and whatnot aren’t too tough.) Robust bottlenecks should still be bottlenecks for the doctor plan. The main difference is probably that our high-schooler ends up doing their undergrad in something more robustly useful than pre-med.
Summary
If we treat a single plan as our optimization target, we’re in trouble when reality throws surprises at us. In preparadigmatic fields, where surprises and unknown unknowns are guaranteed, a better idea is to target robust bottlenecks: subproblems which are bottlenecks to a wide variety of plans. The more robust the bottleneck is to different assumptions, the more likely that it will still be a bottleneck under whatever conditions reality throws at us.
Even in better-understood areas, reality does have a tendency to surprise us, and it’s usually not very costly to target robust bottlenecks rather than a particular plan anyway. After all, if the bottleneck is robust, it’s probably a bottleneck in whatever particular plan we would have targeted.
So what’s the point of a plan? A plan is no longer an optimization target, but instead a prediction—a particular possible path. Sometimes it’s useful to discuss a “mainline plan”, i.e. a modal path. And in general, we want to consider many possible plans, and look for subproblems which are bottlenecks to all of them.