Biggest disagreement between the average worldview of people I met with at EAG and my own is something like “cluster thinking vs sequence thinking,” where people at EAG were often like “but even if we get this specific policy/technical win, doesn’t it not matter unless you also have this other, harder thing?” and I was often more like, “Well, very possibly we won’t get that other, harder thing, but still seems really useful to get that specific policy/technical win, here’s a story where we totally fail on that first thing and the second thing turns out to matter a ton!”
As someone who used to be fully sequence thinking-oriented and gradually came round to the cluster thinking view, I think it’s useful to quote from that post of Holden’s on when it’s best to use which type of thinking:
I see sequence thinking as being highly useful for idea generation, brainstorming, reflection, and discussion, due to the way in which it makes assumptions explicit, allows extreme factors to carry extreme weight and generate surprising conclusions, and resists “regression to normality.”
However, I see cluster thinking as superior in its tendency to reach good conclusions about which action (from a given set of options) should be taken. …
Note that this distinction is not the same as the distinction between explicit expected value and holistic-intuition-based decision-making. Both of the thought processes above involve expected-value calculations; the two thought processes consider all the same factors; but they take different approaches to weighing them against each other. Specifically:
Sequence thinking considers each parameter independently and doesn’t do any form of “sandboxing.” So it is much easier for one very large number to dominate the entire calculation even after one makes adjustments for e.g. expert opinion and other “outside views”...
The two have very different approaches to what some call Knightian uncertainty (also sometimes called “model uncertainty” or “unknown unknowns”): the possibility that one’s model of the world is making fundamental mistakes and missing key parameters entirely…
Also this:
Cluster thinking is more similar to empirically effective prediction methods
Sequence thinking presumes a particular framework for thinking about the consequences of one’s actions. It may incorporate many considerations, but all are translated into a single language, a single mental model, and in some sense a single “formula.” I believe this is at odds with how successful prediction systems operate, whether in finance, software, or domains such as political forecasting; such systems generally combine the predictions of multiple models in ways that purposefully avoid letting any one model (especially a low-certainty one) carry too much weight when it contradicts the others.
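To make that last point concrete, here's a toy sketch of the difference (my own illustration with made-up numbers, not anything from Holden's post). A single confidence-weighted formula can still be swamped by one extreme, low-confidence estimate, whereas weighing the decisions each perspective implies against each other can't be:

```python
# Toy contrast between "sequence"-style and "cluster"-style aggregation.
# All numbers are made up purely for illustration.

# Each perspective gives (estimated_value_of_acting, confidence_in_that_perspective).
perspectives = [
    (-1_000_000, 0.05),  # one explicit model spits out an extreme number, but is very shaky
    (4,          0.7),   # expert opinion: modestly positive
    (2,          0.6),   # historical base rate: modestly positive
    (6,          0.8),   # analogy to similar past wins: modestly positive
]

def sequence_style(perspectives):
    """One unified formula: confidence-weighted average of the raw values.
    A single extreme, low-confidence number can still dominate the result."""
    return sum(v * c for v, c in perspectives) / sum(c for _, c in perspectives)

def cluster_style(perspectives):
    """Reduce each perspective to the decision it implies (+1 = act, -1 = don't act),
    then weigh those conclusions by confidence. No one model's magnitude can
    swamp the others; only its confidence-weighted vote counts."""
    score = sum(c * (1 if v > 0 else -1) for v, c in perspectives)
    return "act" if score > 0 else "don't act"

print(sequence_style(perspectives))  # roughly -23,000: the shaky extreme estimate dominates
print(cluster_style(perspectives))   # "act": the modest perspectives outvote it
```

Obviously both aggregation rules are caricatures; the point is just the structural one about how much weight a single low-certainty model is allowed to carry.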
here’s a story where we totally fail on that first thing and the second thing turns out to matter a ton!
I’m confused as to why this is inconsistent with sequence thinking. This sounds like identifying a mechanistic story for why the policy/technical win would have good consequences, and accounting for that mechanism in your model of the overall value of working on the policy/technical win. Which a sequence thinker can do just fine.
Sequence thinking can totally generate that, but it also seems prone to this kind of stylized, oversimplified model where you wind up with too few arrows in your causal graph and then wrongly conclude that some parts are strictly necessary and others aren't helpful.
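A toy numerical version of what I'm gesturing at, with completely made-up probabilities and values (just to show the structure, not to estimate anything real):

```python
# Toy illustration of the "too few arrows" failure mode. All numbers are made up.

# One-pathway model: the policy win only pays off if we ALSO get the harder thing.
p_harder_thing = 0.2      # probability we also get the harder, second thing
value_if_both = 100       # value of the policy win in worlds where we get both

ev_one_pathway = p_harder_thing * value_if_both
print(ev_one_pathway)     # 20.0 -> "the win barely matters unless we get the harder thing"

# Two-pathway model: add the arrow the simple model left out, i.e. a story where we
# fail on the harder thing but the policy win still turns out to matter a lot.
p_win_matters_anyway = 0.3  # chance the win is pivotal even without the harder thing
value_in_that_world = 80

ev_two_pathways = ev_one_pathway + (1 - p_harder_thing) * p_win_matters_anyway * value_in_that_world
print(ev_two_pathways)    # 20 + 0.8 * 0.3 * 80 = 39.2 -> the win looks roughly twice as valuable
```

Nothing in the second model is exotic; it's just the "story where we totally fail on that first thing" made explicit, and including it roughly doubles the estimated value of the win.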
I worry there’s kind of a definitional drift going on here. I guess Holden doesn’t give a super clean definition in the post, but AFAICT these quotes get at the heart of the distinction:
Sequence thinking involves making a decision based on a single model of the world …
Cluster thinking – generally the more common kind of thinking – involves approaching a decision from multiple perspectives (which might also be called “mental models”), observing which decision would be implied by each perspective, and weighing the perspectives in order to arrive at a final decision. … [T]he different perspectives are combined by weighing their conclusions against each other, rather than by constructing a single unified model that tries to account for all available information.
“Making a decision based on a single model of the world” vs. “combining different perspectives by weighing their conclusions against each other” seems orthogonal to the failure mode you mention (which is a failure to account for a mechanism that the “cluster thinker” here explicitly foresees). I’m not sure if you’re claiming that, empirically, people who follow sequence thinking have a track record of this failure mode? If so, I guess I’m just suspicious of that claim and would expect it’s grounded mostly in vibes.