The Three Stages Of Model Development
TL;DR: First you go with your gut, then you build a logical model, then you improve that model. Trusting your logical model over your gut before the model is good enough is a common way to end up believing wrong things.
[epistemic status: probably approximately true, with possible pathological cases around the edges]
The process of getting better at describing and predicting things seems to usually go something like this:
First, you start out with an intuitive model, which nature gives you automatically without any effort on your part. This model uses the language of System 1, and is a black box whose contents are unknown to you.
Then, you develop a weak analytical model in the language of your System 2. Your first attempt at an analytical model is generally worse at describing and predicting things than your intuitive model, which is why I’m calling it “weak”.
Finally, after incrementally improving your analytical model over some period of time, you end up with a strong analytical model: one that surpasses your intuition.
Advantages
Analytical models are good because they are much easier to improve than intuitive ones. For example, it is hard to convince your System 1 that getting a vaccine shot is a good idea, but your System 2 can improve its understanding of the world to the point where it sees that getting the shot is worth it.
Analytical models are also nice because you can see how their parts work, which makes it easier to transfer lessons learned in one area to problems in another.
A model will do better in some situations than in others, so whether you should use your intuitive model or your analytical one depends on the situation. Figuring out a model’s better and worse subjects is beyond the scope of this post.
Composition
All analytical models are ultimately composed out of intuitive models. Maybe you start with an intuitive understanding of what bleggs and rubes are, but then quickly come up with the analytical model that says bleggs are “objects that are round and blue”, while rubes are “objects that are cube-shaped and red”. This model doesn’t yet analytically define what “round”, “cube-shaped”, “red”, and “blue” mean! Those are defined intuitively to start. But when you go back and define, say, what “blue” means in terms of light and human eyes, you have to define light and eyes intuitively. And so on, all the way down.
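To make the layering concrete, here is a minimal sketch in Python (every name in it is my own illustration, nothing from the post). Deepening the model would mean replacing a looks_* stub with an explicit definition, which would in turn bottom out in new intuitive primitives:

```python
# Toy sketch: an explicit (System 2) classifier built on primitives that
# are themselves only intuitively defined. The "looks_*" predicates stand
# in for System 1 judgments; the analytical layer combines them explicitly
# but does not define them.

def looks_round(obj: dict) -> bool:
    return obj["shape"] == "round"  # ...but "round" itself is intuitive

def looks_blue(obj: dict) -> bool:
    return obj["color"] == "blue"   # ...but "blue" itself is intuitive

def looks_cubic(obj: dict) -> bool:
    return obj["shape"] == "cube"

def looks_red(obj: dict) -> bool:
    return obj["color"] == "red"

def classify(obj: dict) -> str:
    """The analytical model: explicit rules over intuitive primitives."""
    if looks_round(obj) and looks_blue(obj):
        return "blegg"
    if looks_cubic(obj) and looks_red(obj):
        return "rube"
    return "unknown"

print(classify({"shape": "round", "color": "blue"}))  # blegg
```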
In general, the more you improve your model, the deeper that model becomes. This is because the universe happens to be really complicated and you need detail to cover all the nuances.
Personalities
Some people tend to trust their intuitions more, while others trust logic more. In the short run, intuitive people are better modelers, because competing with nature-given models is hard. In the long run, analytical people are better modelers, because they can keep improving their models over time while intuitive people mostly can’t.
The Weak Model Trap
The big trap that analytically inclined people are likelier to fall into is trusting their analytical models before those models have matured.
For example, I’ve seen a physics student, upon learning that “an object in motion remains in motion unless acted upon by an outside force”, predict that a ball rolled around the inside of a pie tin with a quarter of the rim cut out would float along a curved path across the gap, continuing in a circle. The student rejected their gut feeling that the ball would fall out because they favored their mistaken reading of the physics.
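For reference, here is the one-step derivation of what the law actually predicts at the gap (standard Newtonian mechanics; the notation is mine, not the post’s):

```latex
% Once the ball reaches the cut-out, the rim no longer pushes on it, so
% the net horizontal force is zero and Newton's first law gives:
\[
  \vec{F}_{\mathrm{net}} = 0
  \;\Longrightarrow\;
  \frac{d\vec{v}}{dt} = 0
  \;\Longrightarrow\;
  \vec{v} = \text{constant}.
\]
% Constant velocity means a straight line: the ball exits along the
% tangent. The circular path inside the tin was only possible because the
% rim supplied the centripetal force F_c = m v^2 / r, which vanishes at
% the gap.
```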
This “weak model trap” seems especially common when trying to understand human values. Adherents of naive utilitarianism seem to be victims of it. Likewise, the argument that death isn’t bad because you aren’t around to experience it is very clever, but it fundamentally misses the point in a way that your gut instinct saying “death is bad” does not.
A Hope
I can’t recommend indiscriminately listening to your gut more often. But I do think it would be better if people were more aware of what they’re doing when they go around biting bullets for their analytical models. I hope this model of models will make you pause and reconsider before going against your instincts, so that you’re less likely to trust a bad model.
---
(This is a heavily revised version of this Tumblr post of mine: paradigm-adrift.tumblr.com/post/163145257740/paradigm-adrift-it-seems-like-theres-this)
It’s not impossible to train intuitive models. In most fields, most experts have good intuitive models, and I see no reason to believe your claim that strong analytical models are categorically better than strong intuitive ones.
A common schema of learning distinguishes four stages:
unconscious incompetence,
conscious incompetence,
conscious competence,
unconscious competence.
In that model unconscious competence is better than conscious competence.
Related paper: Conditions for Intuitive Expertise (Kahneman & Klein 2009).
Related: Shell, Shield, Staff, Singularity Mindset. The difference seems to be that your Stage 3 remains in System 2, whereas my current take on the stages is:
Stage 1: bad intuitive model in System 1.
Stage 2: scaffold and practice a good analytical model in System 2.
Stage 3: push System 2 model back into System 1 so that it becomes a good intuitive model. Eventually the System 2 scaffolding itself is taken down.
This also aligns with my model of model-building. There is a related idea in mathematics about mathematicians going through pre-formal, formal, and post-formal stages: they start out not using rigorous proofs and make a lot of mistakes because of it; then they learn to use proofs to think rigorously; and finally, mature mathematicians usually don’t need to think in rigorous proofs anymore, but instead think mostly in intuitions, for which they can produce rigorous proofs on demand if needed.
Yeah, I was just reading about exactly this on Terence Tao’s blog.
One problem I occasionally face is talking to people who are in the post-formal stage when I am only pre-formal, and who, not noticing the inferential gap, treat me as though we’re both on the same level. Such conversations can be awkward. I usually use my skill of ‘not minding sounding stupid’ to gain an advantage there.
Promoted to frontpage.
One advantage of having both a weak intuitive model and a weak analytical model is that you can notice where there are mismatches in their predictions and flag them as places where you’re confused.
This helps with making predictions about specific cases. Where your intuitive naive physics and your System 2 sense that “objects in motion tend to remain in motion” make the same prediction, they’re usually right. Where they disagree, you now have a trigger to look into the case in a more careful, detailed way rather than relying on either cached model.
It also helps with upgrading your models. Instead of waiting to be surprised by reality when it contradicts what your model predicted, you can notice as soon as your intuitive and analytical models disagree with each other and look for ways to improve them.
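As a minimal sketch of that trigger (the names here are hypothetical; any two models that map a case to a prediction would do):

```python
# Minimal sketch: use disagreement between two cached models as a flag
# for cases that deserve careful, detailed analysis.
from typing import Callable, Iterable, List, TypeVar

Case = TypeVar("Case")

def find_confusions(
    cases: Iterable[Case],
    intuitive: Callable[[Case], object],
    analytical: Callable[[Case], object],
) -> List[Case]:
    """Return the cases where the two models disagree.

    Where they agree, trust the shared answer; where they disagree,
    investigate directly instead of relying on either cached model.
    """
    return [case for case in cases if intuitive(case) != analytical(case)]

# Toy usage: "will the ball keep curving?" as a function of how many
# degrees of pie-tin rim remain. The gut says "only if some rim is left";
# the misread law says "always".
gut = lambda rim_degrees: rim_degrees > 0
misread_law = lambda rim_degrees: True
print(find_confusions([0, 90, 270, 360], gut, misread_law))  # [0]
```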