Note, I consider this post to be “Lynette speculates based on one possible model”, rather than “scientific evidence shows”, given my default skepticism toward psych research.
A recent Astral Codex Ten post argued that advice is written by people who struggle because they put tons of time into understanding the issue. People who succeeded effortlessly don’t have explicit models of how they perform (section II). It’s not the first time I’ve seen this argument, e.g. this Putanumonit post arguing that explicit rules help poor performers, who then abandon the rules and just act intuitively once they become good.
This reminded me of a body of psych research I half-remembered from college called Choking under Pressure.
My memory was that if you think about what you’re doing too much after becoming good, then you do worse. The paper I remembered from college was from 1986, so I found “Choking interventions in sports: A systematic review” from 2017.
It turns out that I was remembering the “self-focused” branch of choking research.
“Self-focus approaches have largely been extended from Baumeister’s (1984) automatic execution hypothesis. Baumeister explains that choking occurs because, when anxiety increases, the athlete allocates conscious attention to movement execution. This conscious attention interferes with otherwise automatic nature of movement execution, which results in performance decrements.”
(Slightly worrying. I have no particular reason to doubt this body of work, but Baumeister’s “willpower as muscle”—i.e. ego depletion—work hasn’t stood up well.)
Two studies found that distraction while training negatively impacted performance. I’m not sure if this was supposed to acclimatize the participants to distractions while performing or to reduce their self-focus while training. (I’m taking the paper’s word and not digging beyond the surface on the numbers.) Either way, I feel very little surprise that practicing while distracted was worse. Maybe we just need fatal-car-crash-magnitude effects before we notice that focus is good?
Which makes it all the more surprising that seven of eight studies found that athletes performed better under pressure if they simultaneously did a second task (such as counting backwards). (The eighth study found a null result.) According to the theory, the second task helped because it distracted from self-focus on the step-by-step execution.
If this theory holds up, it seems to support paying deliberate attention to explicit rules while learning but *not* paying attention to those rules once you’re able to use them intuitively (at least for motor tasks). In other words, almost exactly what Jacob argued in the Putanumonit article.
Conclusions
I was intrigued by this argument because I’ve argued that building models is how one becomes an expert.[1] After considering it, I don’t actually think the posts above offer a counterargument to my claim.
My guess is that experts do have models of skills they developed, even if they have fewer models (because they needed to explicitly learn fewer skills). The NDM method for extracting experts’ models implies that the experts have models that can be coaxed out. Holden’s Learning By Writing post feels like an explicit model.
Another possibility is that experts forget the explicit models after switching to intuition. If they faced the challenges more than five or ten years ago, they may not remember the models that helped them then. Probably uncoincidentally, this aligns neatly with Cal Newport’s advice to seek advice from someone who recently went through the challenges you’re now facing because they will still remember relevant advice.
Additionally, the areas of expertise I care about aren’t like walking, where most people will effortlessly succeed. Expertise demands improving from where you started. Both posts and the choking under pressure literature agree that explicit models help you improve, at least for a while.
“Find the best explicit models you can and practice until you don’t need them” seems like a reasonable takeaway.
[1] Note, there’s an important distinction between building models of your field and building models of skills. It seems like the main argument mostly applies to models of skills. I doubt Scott would disagree that models of fields are valuable, given how much time he’s put into developing his model of psychopharmacology.
The choking under pressure results are all about very fast athletic tasks where smoothness is critical. Most cognitive tasks will have enough time to think about both rules and then separately about intuitions/automatic skills. So getting benefit from both is quite possible.
I apologize in advance for the lengthy and tangential reply.
Gerd Gigerenzer offers a counterpoint: expertise in orderly systems is very different from expertise in complex systems (such as sports or financial markets). In the latter, heuristics and System 1-style thinking perform better; explicit modelling is quite simply too inefficient, or incapable of dealing with all the differing factors.
“In a world of known risk, everything, including the probabilities, is known for certain. Here, statistical thinking and logic are sufficient to make good decisions. In an uncertain world, not everything is known, and one cannot calculate the best option. Here, good rules of thumb and intuition are also required.” Gerd Gigerenzer—Risk Savvy: How to Make Good Decisions
A sporting example he gives is that if a fielder in baseball simply fixes his eyes on the ball and runs towards it, he doesn’t need to explicitly calculate the trajectory of the ball. While you could argue that the calculation is performed indirectly by the player’s proprioceptors and vestibular system, it’s certainly not “explicit”.
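A minimal sketch of the contrast, with illustrative numbers of my own: the explicit calculation below is what the heuristic lets the player skip entirely.

```python
import math

def landing_point(speed, launch_deg, g=9.81):
    """The explicit model the fielder never runs: projectile range
    from launch speed and angle (flat ground, no air resistance)."""
    theta = math.radians(launch_deg)
    return speed ** 2 * math.sin(2 * theta) / g

print(landing_point(30.0, 50.0))  # ~90.3 m, and this still ignores drag, spin, wind...

# The gaze heuristic replaces all of the above with a feedback rule:
# fixate the ball, start running, and adjust your running speed so the
# gaze angle stays constant; you arrive where the ball lands, no physics needed.
```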
However, expertise tends to be narrow. I’m thinking of that overused Niels Bohr quote about how an expert is someone who has learned the hard way every mistake possible in a narrow field. Or, in the Cynefin framework, you have “Simple”, “Complicated”, “Complex”, and “Chaotic” systems, where “Simple” sits on a cliff next to “Chaotic” because once the constraints are removed, the expertise or best practice that works predictably in Simple systems falls apart.
This can be exploited for competitive gain. (I’m sure this all ties back to OODA loops.) Double Formula One World Champion Fernando Alonso, like most elite sportsmen, is extremely competitive, and he claims that even when he plays against professional tennis players he still needs to “kill their strength”. To do this, he operates outside their comfort zone:
“I used to play tennis, and when I play with someone good, I would put the ball very high. Because, like this, you stop the rhythm of them because they are used to hitting the ball very hard.
“Playing with professionals, the ball arrives very strong for them so they are used to that kind of shot.
“But when you put the ball high, they make mistakes, because the ball arrives very soft. So I can play better tennis when putting the ball high.
“Putting the ball high is my only chance to beat them. So I do that automatically.
“It’s not only on racing I just need to destroy the strengths of the others, and try to maximize mine.”
At the risk of throwing in another tangent: Marvin Minsky’s idea of negative expertise is that the mind is composed more of ‘critic circuits’ that suppress certain impulses than of positive or attractive circuits, which prevents us from babbling or experimenting with strategies or tactics that haven’t worked before. This is why, when we think of leaving a room, we don’t consider the window, even though it is a means; we opt for the much more expeditious door.
Expertise is more about what not to do than what one should do.[1]
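A minimal sketch of that framing (my own toy encoding, not Minsky’s): candidate actions are generated freely, and learned critics veto the ones associated with failure, rather than anything ranking the good ones.

```python
# Toy model of negative expertise: the knowledge lives in the vetoes.
# The critics below are hypothetical learned rules, not from Minsky.
CRITICS = [
    lambda action: "window" in action,             # exits that hurt
    lambda action: "previously failed" in action,  # tactics that never worked
]

def viable(candidates):
    """Keep only the actions no critic suppresses."""
    return [a for a in candidates if not any(critic(a) for critic in CRITICS)]

print(viable(["leave by the door", "leave by the window",
              "retry previously failed plan"]))
# ['leave by the door']
```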
I think what Alonso is doing here is exploiting, or rather inverting, negative expertise: these professional players have trained in a narrow band of situations (playing against other elite players) and have an intuitive bag of tricks for playing against them. Alonso instead forces them to play in a way they are not trained for.
I wonder if choking is just that: something in the environment that they weren’t trained for. It’s not that they are overthinking; it’s that they can’t rely on intuition because there isn’t a precedent.
Another explanation for choking I see is that it isn’t conscious at all. Maybe I’ve been too influenced by the Hollywood trope of the player at the championship game who somehow locks eyes with his estranged wife in the grandstand and, overcome by a wellspring of feelings, screws up the play. This may be why I assume choking is related to anxiety. And anxiety is a whole-body experience, not merely “thought”. It is somatic: it affects your endocrine system, your cardiovascular system, and so on. Perhaps the body drives the thoughts just as much as the thoughts drive the body.
At any rate, I think that when it comes to building models, one first needs to identify which kind of system one is operating in: one with known risks, or one with high uncertainty. In the latter, it would seem the first priority is mapping the circumference of “optimal” operation (i.e. “don’t step over that line”, “avoid the impulse to...”) and then finding heuristics rather than explicit models.
Interestingly, Refutative Instruction has been very profitable for John Cleese and Antony Jay, both as comedians who made comedy from the wrong way to run a hotel or a government ministry, and as businessmen who made actual industrial training videos that showed students the wrong way to do something before showing them best practice.
While the pedagogical value might simply come from the fact that it is “entertaining”, I am inclined to believe that it is also effective in the same way that Minsky’s negative expertise theory explains how learning works. (I invite you to draw your own comparisons to the kairos of Plato’s Dialogues.)
Occasionally, I get asked for feedback on someone’s resume. I’m not really a resume-editing coach, but I can ask them what they accomplished in that role where they’re just listing their duties. Over time, I’ve found I’m completely replaceable with this rock.
You did X for Y company? Great! Why is it impressive? Did it accomplish impressive outcomes? Was it at an impressive scale? Did it involve impressive other people or companies?
You wrote an application using Z software? Great! Why is it impressive? Did the code speed up run time by an impressive amount? Did it save an impressive amount of money? Did it enable an impressive research finding? Does it display an impressive amount of technical expertise?
You published a thing? Great! Why is it impressive? Was it published somewhere impressive? Was it cited an impressive number of times? Did impressive people say good things about it?
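A minimal sketch of the rock, using only the claim types and questions from the list above:

```python
# The "rock": the follow-ups depend only on the kind of claim, never its content.
PROBES = {
    "did X for Y company": [
        "Did it accomplish impressive outcomes?",
        "Was it at an impressive scale?",
        "Did it involve impressive other people or companies?",
    ],
    "wrote an application": [
        "Did the code speed up run time by an impressive amount?",
        "Did it save an impressive amount of money?",
        "Did it enable an impressive research finding?",
        "Does it display an impressive amount of technical expertise?",
    ],
    "published a thing": [
        "Was it published somewhere impressive?",
        "Was it cited an impressive number of times?",
        "Did impressive people say good things about it?",
    ],
}

def rock(claim_type):
    return ["Great! Why is it impressive?"] + PROBES[claim_type]

print("\n".join(rock("published a thing")))
```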
Fabricated goals

I’m really good at taking an abstract goal and breaking it down into concrete tasks. Most of the time, this is super useful.
But if I’m not sure what would accomplish the high-level goal, sometimes the concrete goals turn out to be wrong. They don’t actually accomplish the vaguer high-level goal. If I don’t notice that, I’ll at best be confused. At worst, I’ll accomplish the concrete goals, fail at the high-level goal, and then not notice that all my effort isn’t accomplishing the outcome I actually care about.
I’m calling these misguided concrete goals “fabricated goals”, because I’ve falsely convinced myself that they’re a route to my high-level goal.
The alternative feels pretty bad though. If I can’t break the vague goal into concrete steps that I know how to do, I need to be constantly feeling my way through uncertainty. In that situation, sometimes it’s good to pick a small, time-bound concrete goal and do it to see if it helps. But I need to be constantly checking in on whether it’s actually helping.
I’ve been practicing improving my feedback loops a lot this year, and they’ve come a long way. Sitting with uncertainty for days while checking whether I’m making progress on the scale of minutes, though, is hard. I’ve heard the early stages of research called “wandering in the desert”, and this feels similar.
It’s so much easier to substitute a fabricated goal – I know what I need to do, I can measure my progress. It’s harder to sit with the uncertainty and hopefully, slowly feel my way toward some insight.