Confusing the goal and the path
Say you are working on formulating a new scientific theory. You’re not there yet, but you broadly know what you want: a simple theory that powerfully compresses the core phenomenon, and suggests a myriad of new insights.
If you’re anything like me, at least part of you now pushes for focusing on simplicity from the get-go. Let’s aim for the simplest description that comes easily, and iterate from there.
Did you catch the jump?
I started with a constraint on the goal — a simple theory — and automatically transmuted it into a constraint on the path — simple intermediary steps.
I confused “Finding a simple theory” with “Finding a simple theory simply”.
After first uncovering this in my own reasoning, I now see this pattern crop up everywhere:
(Calibration) I want to be well-calibrated. And it feels like I should thus only make judgments and claims I’m sure are well-calibrated. Yet if I do that, I’ll end up protecting the parts of my model where the confusion and uncertainty lie, and starving my brain of updates.
(Coherence) Another valuable property of mature scientific theories is coherence. And it feels like I should thus ensure that all my intermediary theories are coherent. Yet many examples from the history of science show that incoherent frames and theories can prove key in generating coherent and useful ones: Newton’s early calculus, Heaviside’s operational calculus, Kekulé’s structure of benzene, Bohr’s atom, and Dirac’s delta.
(Winning) In competitive games (sports, board games, esports), the goal is to win. And it feels like becoming a winner means consistently winning. Yet if I aim only to win, the best course of action is to avoid stronger opponents and the valuable lessons they can teach, both explicitly and implicitly.
In the language of moral philosophy, this pattern amounts to putting a deontological spin on a consequentialist value. Even though doing so can prove valuable (in AI alignment, for example), doing it unconsciously and automatically leads to problem after problem.
For it conjures obstacles that were never there.
This post is part of the work done at Conjecture.
Thanks to Clem for giving me the words to express this adequately.