In Search of Strategic Clarity
Context: quickly written up, less original than I expected it to be, but hey that’s a good sign. It all adds up to normality.
The concept of “strategic clarity” has recently become increasingly important to how I think. It doesn’t really have a precise definition that I’ve seen—as far as I can tell it’s mostly just used to point to something roughly like “knowing what the fuck is going on”. Personally, the strongest association I have with the term is a pointer away from a certain undesirable state. When I feel like the problem I’m thinking about is a big muddle, and I don’t have a clear way to make progress because there are no good starting points with solid footing, I call that “lack of strategic clarity”.
Anecdotally, it seems like I rarely produce thinking that I consider good and useful when I try to just charge through a lack of strategic clarity. I often get frustrated, or wind up on claims whose truthfulness I don’t know how to assess, or just freeze up and find myself staring blankly out the window all of a sudden. It’s gotten to the point where I pretty much never do that anymore—when I feel muddled, I almost instinctively flee back up the abstraction ladder, and think about ways I could approach the problem that wouldn’t feel so muddled.
But there’s a hole in this that bothers me. I don’t have a crisp definition for what “strategic clarity” actually is! I’m fine with using it as a loose felt-sense label when it’s not very load-bearing, but if I’m frequently going out and searching for strategic clarity, I’d better know what exactly I’m looking for!
So if you’re willing to tolerate a frankly heinous degree of abstraction, join me as I attempt to get strategic clarity on how to get strategic clarity. Ugh just looking at that sentence makes me want to vo-
What is Strategic Clarity?
The number one concrete example I have for a topic where I lack strategic clarity is AI strategy/governance. When I try to think about AI strategy, the thing that jumps out at me first is the massive empirical uncertainty. How will the first AGI be developed? Who will develop it? When? How fast? What will it be used for?
But if I imagine I had a magic box that could give me answers to these questions, I don’t think the lack-of-strategic-clarity feeling would go away. Combining all those answers into a strategy still feels very muddled. Lottery tickets have tons of empirical uncertainty, but I feel pretty darn clear about whether I should buy lottery tickets or not. Besides, I can think of examples where I lack strategic clarity with very little empirical uncertainty. I never had a Rubik’s cube phase, and to this day don’t know how to solve one. If I imagine sitting down with a Rubik’s cube, I know exactly what every move does, and exactly where I’m trying to go, but it still feels like a big unclear mess when I try to figure out what to do. I can’t identify any sub-problems, or even moves that would make my situation less scrambled—the task still feels monolithic and unresponsive. Before making any real attempt at a solution, I would have to spend a while figuring out what’s even going on, until that monolithic, unresponsive feeling goes away.
“Monolithic and unresponsive” feels like it’s getting at the heart of the problem, but it also feels like a redundant description. Sitting with a Rubik’s cube does feel both monolithic and unresponsive, but I think it’s unresponsive because it’s monolithic. If I could break down the problem into sub-problems, that would make it much easier to solve—each sub-problem would have fewer degrees of freedom to deal with, and would be easier to think through. Maybe I could even break it down further, until the sub-problems are easy enough to solve by inspection.
Now that I think about it, this is usually how naive attempts to solve a Rubik’s cube go! You try and solve one color first. Sometimes you even succeed. But then it’s much harder to do anything to the other colors without screwing up your one solved color. You can technically decompose the problem into six sub-problems if you want, but this doesn’t actually make anything easier because you can’t solve the problems independently. It doesn’t save you any interactions to think about—you still need to consider the global situation to solve one side at a time.
This also matches with my impression of why AI strategy is hard. The problem is obviously too big for me to solve by inspection. But when I try to think about any particular sub-problem, it doesn’t simplify things by that much. Imagine I try to factor the problem into legislative and non-legislative measures—this doesn’t save me any interactions, because the ideal legislation depends in large part on the non-legislative situation, and vice versa. You can’t solve for them independently. One example of a factorization that’s pretty good is the technical/strategy distinction—solving the technical alignment problem is a clearly necessary sub-goal, and mostly separate from the strategy problem. The “alignment tax” is one important example of how the two sub-problems do still have interactions.
This is my current theory of strategic clarity—it’s the art of finding sub-problems whose interactions matter little or not at all. I’m also hesitantly partial to a variety of pithier but more figurative summaries: factoring problems, or cleaving problems at the joints.
And How Do I Get It?
There are a bunch of things I’ve been doing to try and get strategic clarity, which now make sense in this more explicit framing. Maybe I’ll even be able to come up with some new ones, or get better at using these tools.
Back-chaining: the essence of back-chaining is asking “to get Z, what X and Y should I get first?” For example, if you can identify some last step you always need to take to solve the problem, then you’ve reduced the problem to 1) setting up for the last step and 2) taking that last step. If you can identify instrumental goals that are highly useful, you’ve mostly-reduced the problem to 1) achieving the instrumental goal and 2) finishing the job from there. There are a ton of different ways to back-chain, and they’re all great.
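As a toy illustration, here is a minimal sketch of back-chaining as recursive goal decomposition; the goals and prerequisites are made-up placeholders, not a real plan.

```python
# Minimal sketch of back-chaining: start from the goal and recursively ask
# what would have to be in place just before it. Leaves of the printed tree
# are the sub-problems to tackle directly. All names are illustrative.

prerequisites = {
    "goal Z": ["instrumental goal X", "instrumental goal Y"],
    "instrumental goal X": ["starting resource"],
}

def back_chain(goal: str, depth: int = 0) -> None:
    """Print the goal and, indented beneath it, whatever it chains back to."""
    print("  " * depth + goal)
    for prereq in prerequisites.get(goal, []):
        back_chain(prereq, depth + 1)

back_chain("goal Z")
```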
Definitions for success: this reminds me a lot of back-chaining, but I want to list it separately because I use it slightly differently. When I have a goal that’s expressed in big fuzzy words, instead of some precise outcome like the solution of a Rubik’s cube, it helps to try and come up with a concrete success criterion. What exactly do I mean when I say I want to “do AI safety”? If someone asked me to nail down what qualifies a world as “AI safe”, could I answer? Often, in this process, I end up either identifying sub-problems, or making things more concrete so that I can back-chain.
Dominating terms: this isn’t quite a proper “factorization”, because there’s only one factor, but I definitely still want to list it because I use it a Lot. If you identify one sub-problem, and the results of that sub-problem determine most of the outcome you care about, then you can ignore everything else (for now) and save yourself some interactions.
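In code terms, that check might look something like this rough sketch; the sub-problems and weights are invented purely for illustration.

```python
# Sketch of spotting a dominating term: estimate each sub-problem's
# contribution to the outcome, and if one accounts for most of it,
# defer everything else. Numbers and labels below are made up.

contributions = {
    "get the core decision right": 0.75,
    "polish the write-up":         0.15,
    "pick the perfect venue":      0.10,
}

dominant = max(contributions, key=contributions.get)
share = contributions[dominant] / sum(contributions.values())
if share > 0.5:
    print(f"'{dominant}' carries {share:.0%} of the outcome; work on it and ignore the rest for now.")
```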
Limiting factors: sort of a special case of finding a dominating term, but specifically at the current margin. A limiting factor means that your output won’t increase unless you improve that factor. For now, you can ignore everything else—you’re not going to get improvement until you fix this one specific thing.
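A minimal sketch of the bottleneck framing, with made-up stages and capacities:

```python
# Sketch of a limiting factor: when output is gated by the slowest of several
# stages, only improving the current bottleneck raises output at the margin.
# Stage names and capacities are illustrative placeholders.

stages = {"research": 5, "writing": 2, "review": 8}   # units per week
bottleneck = min(stages, key=stages.get)
print(f"Output is {min(stages.values())}/week; only improving '{bottleneck}' raises it right now.")
```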
Possibility trees: unless you’ve got something really gnarly going on, alternate timelines of how things might unfold generally don’t interact. “How would I solve AI strategy if timelines were fast” and “how would I solve AI strategy if timelines were slow” are very separate sub-problems, and breaking the problem down this way mostly just cuts out one dimension of uncertainty. Repeat until satisfied.
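Here is a rough sketch of that branching, with hypothetical scenario dimensions and plans standing in for real ones:

```python
# Minimal sketch of a possibility tree: branch on one uncertain dimension,
# treat each branch as its own sub-problem, and recurse by adding more
# dimensions if a branch is still too muddled. Scenarios and plans are fake.

def plan_for(scenario: dict) -> str:
    """Pretend 'solver' for a fully specified scenario."""
    if scenario["timelines"] == "fast":
        return "prioritize work that pays off within a few years"
    return "invest in field-building and longer-horizon research"

def expand(scenario: dict, dimensions: dict) -> list:
    """Recursively branch on each unresolved dimension of uncertainty."""
    unresolved = [d for d in dimensions if d not in scenario]
    if not unresolved:                                  # every dimension pinned down:
        return [(scenario, plan_for(scenario))]         # this branch is now solvable
    dim = unresolved[0]
    branches = []
    for value in dimensions[dim]:
        branches += expand({**scenario, dim: value}, dimensions)
    return branches

# Branch on timelines first; "repeat until satisfied" by adding dimensions.
for scenario, plan in expand({}, {"timelines": ["fast", "slow"]}):
    print(scenario, "->", plan)
```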
Conservative assumptions: if you suspect a particular dimension of uncertainty may not be very decision-affecting, try evaluating what you would do if it were far to one extreme, and then what you would do if it were far to the other extreme. If the actions are the same and the mass of probability outside those error bars is sufficiently low, then you can mostly ignore it.
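A minimal sketch of that two-extremes check, with an invented decision problem (the actions, payoffs, and range are all illustrative):

```python
# Sketch of the "check both extremes" test: if the best action is the same at
# both ends of the uncertain parameter's plausible range, that uncertainty
# probably isn't decision-relevant and can be set aside for now.

def payoff(action: str, p: float) -> float:
    """Made-up payoff as a function of one uncertain probability p."""
    if action == "prepare":
        return 10 * p - 1     # costs 1, pays off 10 if the event happens
    return 0.0                # "don't prepare": nothing gained, nothing spent

actions = ["prepare", "don't prepare"]
low, high = 0.2, 0.8          # illustrative error bars on p

best_at_low = max(actions, key=lambda a: payoff(a, low))
best_at_high = max(actions, key=lambda a: payoff(a, high))

if best_at_low == best_at_high:
    print(f"Best action is '{best_at_low}' at both extremes: set this uncertainty aside for now.")
else:
    print("The decision flips between extremes: this uncertainty actually matters.")
```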
Black-box assumptions: pretend you have a magic black box that solves a particular sub-problem. Does this make it easier to solve the remaining problem, or do the particular details of the black box matter a lot for how you approach the remaining difficulty? A black box that puts half the pieces in a jigsaw puzzle into the right places is super useful. A black box that puts half the squares on a Rubik’s cube into the right places is not. If you can think of a black box you’d be happy to have, you’ve got a factorization! Related, but not identical, to finding instrumental goals.
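In programming terms this is roughly stubbing out a dependency: hide the sub-problem behind a narrow interface and see whether the rest of the plan can be written against that interface alone. A rough sketch, where every name is a hypothetical placeholder:

```python
# Sketch of a black-box assumption: if the remaining plan only needs the box's
# interface, you've found a factorization; if you keep needing the box's
# internals, you haven't.

from typing import Callable

def plan_given_box(solve_subproblem: Callable[[str], str]) -> list[str]:
    """The rest of the plan only touches the magic box through its interface."""
    midpoint = solve_subproblem("the state the box needs as input")
    return ["get to the state the box needs", midpoint, "finish the job from there"]

# A stub standing in for the magic box; its internals are deliberately fake.
stub = lambda state: f"<assume the sub-problem is solved from: {state}>"

print(plan_given_box(stub))
```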