Intuition should be applied at the lowest possible level
Earlier today I lost a match of Prismata, a turn-based strategy game without RNG. When I analyzed the game, I discovered that changing one particular decision I had made on one turn from A to B caused me to win comfortably. A and B had seemed very close to me at the time, and even once I knew for a fact that B was far superior, it wasn’t intuitive why.
Then I listed the main results from A and B, valued those by intuition, and immediately B looked way better.
One can model these problems on a bunch of different levels, where going from level n to n+1 means hiding the details of level n and approximating their results more crudely. At level 1, one would compare the two subtrees whose roots are decisions A and B (this should work just like in chess). Level 2 would be looking at exact resource and attack numbers in subsequent turns, level 3 would be categorizing the main differences between A and B and giving them intuitive values, and level 4 would be deciding between A and B directly. What my mistake showcases is that, even in a context where I am quite skilled and which has limited complexity, applying intuition at level 4 instead of level 3 led to a catastrophic error.
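To make level 3 concrete, here’s a minimal sketch in Python. The listed consequences and all of the numbers are invented for illustration; they’re not from the actual game.

```python
# Level-3 evaluation: list the main consequences of each decision and
# score each with a made-up intuitive number, instead of asking the gut
# to compare A and B wholesale (level 4). All consequences and scores
# below are hypothetical.

consequences = {
    "A": {"extra attacker one turn sooner": +3,
          "lose an economy unit": -2,
          "weaker defense next turn": -2},
    "B": {"attacker delayed one turn": -1,
          "keep the economy unit": +2,
          "opponent can't breach": +4},
}

def level3_value(decision: str) -> int:
    """Sum the intuitive scores of a decision's listed consequences."""
    return sum(consequences[decision].values())

for decision in ("A", "B"):
    print(decision, level3_value(decision))
# A -1
# B 5
```

Once the pieces are scored separately, B’s advantage jumps out, even if A and B look about equal when compared head-to-head by gut feeling alone.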
If you can’t go lower, fine. But there are countless cases of people using intuition on a level that’s unnecessarily high. Hence, if it’s worth doing, it’s worth doing with made-up numbers. That is just one example where applying intuition one level further down (“what quantity of damage arises from this?” rather than “how bad is it?”) can make a big difference. On questions of medium importance, briefly asking yourself “is there any point where I apply intuition on a level that’s higher than necessary?” seems like a worthy exercise.
Meta: I write this in the spirit of valuing obvious advice, and out of the suspicion that this error is still made fairly often.
I think this is a really interesting framing. I like to do something that seems related but slightly different. Where I see what you’re describing as something like “explicitly (system 2) take something down one level (potentially into smaller pieces), and apply intuition (system 1) to each of the pieces”, I like to do “explicitly (system 2) consider the problem from a number of different angles / theories, and try applying intuition (system 1) to each angle, and see whether the results agree or how they differ.”
To give an example, because I think I’m being too abstract: If I am thinking of making an investment decision, I won’t just query my intuition “is this a good investment?” because it doesn’t necessarily have useful things to say about that. Instead I will query it “how does this seem to compare to an equity index fund”, and “what does an adequacy analysis say about whether there could plausibly be free money here”, and “how does this pattern-match against scams I’m familiar with”, and “what does the Outside View say happens to people who make this type of investment”, and “what does Murphyjitsu predict I will regret if I invest thusly?” This seems similar to your described approach, if not quite the same.
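One toy way to picture this (the angles are the ones above; the scores and the aggregation are made up for illustration): query each framing separately, then look at both the average verdict and how much the framings disagree.

```python
from statistics import mean

# Gut-feel verdict from each angle, on a -1 (bad) to +1 (good) scale.
# All numbers here are hypothetical.
angle_scores = {
    "vs. an equity index fund": -0.2,
    "adequacy analysis: plausibly free money?": -0.5,
    "pattern-match against familiar scams": -0.8,
    "outside view on this type of investment": -0.4,
    "Murphyjitsu: what will I regret?": -0.6,
}

scores = list(angle_scores.values())
print(f"mean verdict: {mean(scores):+.2f}")        # mean verdict: -0.50
# A large spread means the framings disagree, which is itself a signal
# that the decision needs more explicit (system 2) analysis.
print(f"spread: {max(scores) - min(scores):.2f}")  # spread: 0.60
```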
That looks like a more general approach to me, where going one level deeper could be one of the angles considered, and appealing to the outside view another.
This looks like an effect of computational costs, not a strategic mistake. Listing the results of two decisions costs time / cognitive effort (i.e., computation); applying a heuristic (intuitively compare action A to action B) is computationally cheaper, but—as you discovered—more error-prone.
Thus, though you chide people for “using intuition on a level that’s unnecessarily high” [emphasis mine], in fact applying intuition (i.e. heuristics) on a higher level may be quite necessary, for boundedness-of-rationality / computational-cost reasons.
That’s why I said it should be used “on questions of medium importance”. For small recurring decisions, the computational cost could be too high, and for life-changing decisions, one would hopefully have covered this ground already (although on reflection, there are probably counter-arguments here, too). But for everything that we explicitly spend some time on anyway, not bothering to list consequences seems like a strategic mistake to me. Even in the example I used, with only 45 seconds available per turn, I had enough time to do this. And I did spend some time on this decision; I just used it to double- and triple-check with my intuition, rather than going lower.
I reflexively tried to reverse the advice, and found it surprisingly hard to think of situations where applying higher-level intuition would be better.
There’s an excerpt from chess GM Mikhail Tal:
But this is a somewhat contrived example, since it is reminiscent of the pre-rigor, rigor, and post-rigor phases of mathematics (or, more generally, of mastering any skill). And one could argue that chess GMs have so thoroughly mastered the lower levels that they can afford to skip them without making catastrophic errors.
Another example that comes to mind is from Marc Andreessen, in the introduction to Breaking Smart:
It seems quite clear that Jobs wasn’t applying intuition at the lowest level here. And it seems like the end result could have been worse if he had applied intuition at lower levels. He even explicitly says:
I find neither of the examples I came up with convincing. But are there circumstances where applying intuition at lower levels is a strategic mistake?
Applying intuition at lower levels is a strategic mistake when you are substantially more certain that your high-level intuition is well-honed than you are of your ability to explicitly decompose the high level into lower-level components.
(It can also be a strategic mistake for computational-cost reasons, as I outline in my other comment.)
Also, why is the Steve Jobs example unconvincing? It seems, in fact, an example of the sort of thing I am talking about.
Here’s something that Bruce Tognazzini (HCI expert and author of the famous Apple Human Interface Guidelines) said about Steve Jobs:
Had you asked Steve Jobs to break down his intuitions into lower-level components, and then evaluate those, he may well have failed. And yet he made incredible, groundbreaking, visionary products, again and again and again. He had good reason to be confident in his high-level intuitions. Why would he want to discard those, and attempt a lower-level analysis?
I had worded it somewhat poorly; I wasn’t intending to say that Steve Jobs should have attempted a lower-level analysis in technology design.
I just found it unconvincing in the sense that I couldn’t think of an example where applying lower-level intuition was a strategic mistake for me in particular. As you mention in your other comment, I am not substantially more certain that my high-level intuition is well-honed in any particular discipline.
More generally, Steve Jobs consistently applied high-level intuition to big life decisions too, as evidenced by his commencement speech. On the whole it worked out for him, I guess, but he also tried to cure his cancer with alternative medicine, which he later regretted.
I completely agree with your computational tradeoff comment though.