Makes sense! Becoming more rational is a continual journey, and there’s no need to feel ashamed that you’re still learning. I expect you’ll find the process faster and smoother if you approach it as though you’re collaborating with other posters, instead of trying to score points :)
My career exploration: Tools for building confidence
Raemon’s Deliberate (“Purposeful?”) Practice Club
Occasionally, I get asked for feedback on someone’s resume. I’m not really a resume-editing coach, but I can ask them what they accomplished in the roles where they’re just listing their duties. Over time, I’ve found I’m completely replaceable with this rock.
You did X for Y company? Great! Why is it impressive? Did it accomplish impressive outcomes? Was it at an impressive scale? Did it involve impressive other people or companies?
You wrote an application using Z software? Great! Why is it impressive? Did the code speed up run time by an impressive amount? Did it save an impressive amount of money? Did it enable an impressive research finding? Does it display an impressive amount of technical expertise?
You published a thing? Great! Why is it impressive? Was it published somewhere impressive? Was it cited an impressive number of times? Did impressive people say good things about it?
Fabricated goals
I’m really good at taking an abstract goal and breaking it down into concrete tasks. Most of the time, this is super useful.
But if I’m not sure what would accomplish the high-level goal, sometimes the concrete goals turn out to be wrong. They don’t actually accomplish the vaguer, high-level goal. If I don’t notice that, I’ll at best be confused. At worst, I’ll accomplish the concrete goals, fail at the high-level goal, and then not notice that all my effort isn’t accomplishing the outcome I actually care about.
I’m calling these misguided concrete goals “fabricated goals”, because I’m falsely believing the goal is a real option for getting me to my high-level goal.
The alternative feels pretty bad though. If I can’t break the vague goal into concrete steps that I know how to do, I need to be constantly feeling my way through uncertainty. In that situation, sometimes it’s good to pick a small, time-bound concrete goal and do it to see if it helps. But I need to be constantly checking in on whether it’s actually helping.
I’ve been practicing a lot this year on improving my feedback loops, and they’ve come a long way. Sitting with uncertainty for days while checking in on whether I’m making progress on the scale of minutes, though, that’s hard. I’ve heard the early stages of research called “wandering in the desert”, and this feels similar.
It’s so much easier to substitute a fabricated goal – I know what I need to do, I can measure my progress. It’s harder to sit with the uncertainty and hopefully, slowly feel my way toward some insight.
lynettebye’s Shortform
I actually have a heart condition that severely limits my ability to exercise. Walking three miles is beyond what I’m capable of on an average day, let alone jogging anywhere.
This is a surprisingly harsh critique of a minor detail. In the future, I would strongly recommend a more polite, truth-seeking inquiry.
Hmm, it would probably work well to write a longer daily FB post, like if I set a goal to publish at least 500 words each day.
Part of the goal is ‘become comfortable submitting things I’m not fully happy with’ and part is ‘actually produce words faster’. The second part feels like it needs the length requirement. I’ve done daily short FB posts before and found it useful, but I noticed that I tended to write mostly short posts that didn’t require me to hammer out words.
DIY Deliberate Practice
Hmm, I’m not certain where you’re getting that. I interpreted this as saying that the amount of deliberate practice contributed to success in some fields much more than it did in others. (Which could be explained by some fields not having developed techniques and training methods that enable good DP, or by everyone maxing out practice, or by practice not mattering in those fields.) DP still makes a difference among top performers in music and chess, indicating that not all top performers are maxing out deliberate practice in those areas.
I considered that early on during my exploration, but didn’t go deep into it after seeing Scott’s comment on his post saying:
These comparisons held positions (specialist vs. generalist) constant. Aside from whether someone is a specialist or not, I don’t think there’s any tendency for older doctors to get harder cases.
Now, after seeing that the other fields also match the same pattern of decline, I’d be somewhat surprised by evidence that taking on harder cases explained the majority of skill plateaus in middle age for doctors.
Leveling Up Or Leveling Off? Understanding The Science Behind Skill Plateaus
Note: I was treating the 2009 study as a pseudo-replication. It’s not a replication, but it’s a later study on the same topic that found the same conclusion, which had allayed some of my concerns about old psychology research. However, I’ve since looked deeper into Dan Ariely’s work, and the number of accusations of fraud or academic misconduct makes me less confident in the study. https://en.m.wikipedia.org/wiki/Dan_Ariely#Accusations_of_data_fraud_and_academic_misconduct
I agree with the line of reasoning, but I’d probably err on the side of adding a deadline even for designing your office - if you want to make sure the task gets done at some point, setting the deadline a month away seems better than not having one at all.
I agree that adopting high variance strategies makes sense if you think you’re going to fail, but I’m not sure the candle task has high variance strategies to adopt? It’s a pretty simple task.
Do Deadlines Make Us Less Creative?
I feel like being the code master for Codenames is a good exercise for understanding this concept.
I wasn’t thinking of shards as reward prediction errors, but I can see how the language was confusing. What I meant is that when multiple shards are activated, they affect behavior according to how strongly and reliably they were reinforced in the past. Practically, this looks like competing predictions of reward (because past experience is strongly correlated with predictions of future experience), although technically it’s not a prediction: the shard is just based on past experience and will influence behavior similarly even if you rationally know the context has changed. E.g. the cake shard will probably still push you toward eating cake even if you know you just had mouth-altering surgery that means you no longer like cake.
(However, I would expect that shards evolve over time. So in this example, after enough repetitions that reliably fail to reinforce cake eating, the cake shard would eventually stop making you crave cake when you see it.)
So in my example, cleaner language might be: I more reliably ate cake in the past when someone was currently offering me a slice of cake, compared to when someone promised to bring a slightly better cake to the office party tomorrow. So when the “someone is currently offering me something” shard and the “someone is promising me something” shard are both activated, the first shard affects my decisions more, because it was rewarded more reliably in the past.
(One test of this theory might be whether people are more likely to take the bigger, later payout if they grew up in extremely reliable environments where they could always count on the adults to follow through on promises. In that case, their “someone is promising me something” shard should have been reinforced similarly to the “someone is currently offering me something” shard. This is basically one explanation given for the classic Marshmallow Experiment—kids waited if they trusted adults to follow through with the promised two marshmallows; kids ate the marshmallow immediately if they didn’t trust adults.)
Cool, I’m happy if you’re relaxing with a leisure activity you enjoy! The people I spoke with were explicitly not doing this for fun.
Note, I consider this post to be “Lynette speculates based on one possible model”, rather than “scientific evidence shows”, based on my default skepticism for psych research.
A recent Astral Codex Ten post argued that advice is written by people who struggled, because they had to put tons of time into understanding the issue; people who succeeded effortlessly don’t have explicit models of how they perform (section II). It’s not the first time I’ve seen this argument, e.g. this Putanumonit post arguing that explicit rules help poor performers, who then abandon the rules and just act intuitively once they become good.
This reminded me of a body of psych research I half-remembered from college called Choking under Pressure.
My memory was that if you think about what you’re doing too much after becoming good, then you do worse. The paper I remembered from college was from 1986, so I found “Choking interventions in sports: A systematic review” from 2017.
It turns out that I was remembering the “self-focused” branch of choking research.
“Self-focus approaches have largely been extended from Baumeister’s (1984) automatic execution hypothesis. Baumeister explains that choking occurs because, when anxiety increases, the athlete allocates conscious attention to movement execution. This conscious attention interferes with otherwise automatic nature of movement execution, which results in performance decrements.”
(Slightly worrying. I have no particular reason to doubt this body of work, but Baumeister’s “willpower as muscle” work, i.e. ego depletion, hasn’t stood up well.)
Two studies found that distraction while training negatively impacted performance. I’m not sure if this was supposed to acclimatize the participants to distractions while performing or to reduce their self-focus while training. (I’m taking the paper’s word and not digging beyond the surface on the numbers.) Either way, I feel very little surprise that practicing while distracted was worse. Maybe we just need fatal-car-crash magnitude effects before we notice that focus is good?
Which makes it all the more surprising that seven of eight studies found that athletes performed better under pressure if they simultaneously did a second task (such as counting backwards). (The eighth study found a null result.) According to the theory, the second task helped because it distracted from self-focus on the step-by-step execution.
If this theory holds up, it seems to support paying deliberate attention to explicit rules while learning but *not* paying attention to those rules once you’re able to use them intuitively (at least for motor tasks). In other words, almost exactly what Jacob argued in the Putanumonit article.
Conclusions
I was intrigued by this argument because I’ve argued that building models is how one becomes an expert.[1] After considering it, I don’t actually think the posts above offer a counterargument to my claim.
My guess is that experts do have models of skills they developed, even if they have fewer models (because they needed to explicitly learn fewer skills). The NDM method for extracting experts’ models implies that the experts have models that can be coaxed out. Holden’s Learning By Writing post feels like an explicit model.
Another possibility is that experts forget their explicit models after switching to intuition. If they faced the challenges more than five or ten years ago, they may not remember the models that helped them then. Probably not coincidentally, this aligns neatly with Cal Newport’s advice to seek advice from someone who recently went through the challenges you’re now facing, because they will still remember relevant advice.
Additionally, the areas of expertise I care about aren’t like walking, where most people will effortlessly succeed. Expertise demands improving from where you started. Both posts and the choking under pressure literature agree that explicit models help you improve, at least for a while.
“Find the best explicit models you can and practice until you don’t need them” seems like a reasonable takeaway.
[1] Note, there’s an important distinction between building models of your field and building models of skills. It seems like the main argument mostly applies to models of skills. I doubt Scott would disagree that models of fields are valuable, given how much time he’s put into developing his model of psychopharmacology.