Specialization is typically framed as “narrowness of skill,” and I think lsusr and others are pointing out that this is the wrong frame for understanding it. “Narrowness of problem” is a better way to define specialization: a wide range of skills may easily be necessary to tackle the most valuable problems.
lsusr is also pointing out in the comments that we may not be able to figure out what problems exist until we’ve gained some relevant skills.
My sense is that what’s bothering a lot of commenters, here and elsewhere, is the apparent arbitrariness of choosing skills at random. I think that objection matters. I also think there’s a synthesis.
It seems to me that navigating toward interesting problems involves reasoning under uncertainty.
“If I spend another 3 months improving my skills, should I learn something new (say, immunology) or something I already know how to do (say, machine learning)?”
People are making a prediction based partly on whether learning immunology or more ML will be more useful for the problem they’re already focused on. But they’re also predicting which investment is more likely to expose them to a new and more valuable problem to solve.
lsusr is arguing that people tend to focus on the problems they already know about. Having 3 months of ML under your belt makes you aware of the problems you could solve with 6 months of ML experience, which inclines you to invest in another 3 months of ML learning. But if you considered all the unknown unknowns you’d be exposed to if you instead spent those 3 months on immunology, you might find that broadening your skill base results in counterfactually more valuable exposure to problems and more opportunities to create value for others.
So we don’t have to imagine that the new skills are chosen arbitrarily. That’s sort of what the post implies, but I don’t think it’s core to the argument. Instead, the new skills can be chosen based on what seems to offer the most potential for cross-fertilization. Even so, you’re still choosing between acquiring that novel skill and improving a skill you already possess.
If investing equal amounts of time into a new skill versus an old one seems intuitively to offer about equal value, choose to acquire the new skill. Most of the value of improving the old skill is apparent when you’re making the decision, while most of the value of the new skill becomes obvious only after you’ve learned it. So there’s a systematic bias against broadening your skillset, and it takes a rule like this to overcome it.
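To make the bias concrete, here’s a minimal toy simulation (my own sketch, not anything from the post or comments). It assumes both investments’ true values are drawn from the same distribution, but that a fixed fraction of the new skill’s value, the unknown unknowns, is invisible at decision time; HIDDEN_FRACTION and the tie-breaking MARGIN are arbitrary illustrative parameters:

```python
import random

# Toy model: estimated vs. realized value of deepening an old skill
# versus learning a new one. Assumption: a fixed fraction of the new
# skill's value (the unknown unknowns) is invisible when estimating.
random.seed(0)
HIDDEN_FRACTION = 0.5  # assumed share of the new skill's value invisible upfront
MARGIN = 0.1           # how close estimates must be to count as "about equal"

def simulate(trials=100_000):
    naive_total = rule_total = 0.0
    for _ in range(trials):
        old_value = random.uniform(0, 1)  # true value of deepening the old skill
        new_value = random.uniform(0, 1)  # true value of the new skill
        old_estimate = old_value                          # fully visible in this toy model
        new_estimate = new_value * (1 - HIDDEN_FRACTION)  # unknown unknowns hidden
        # Naive chooser: trust the raw estimates.
        naive_total += old_value if old_estimate >= new_estimate else new_value
        # Rule from above: break apparent near-ties toward the new skill.
        if abs(old_estimate - new_estimate) < MARGIN:
            rule_total += new_value
        else:
            rule_total += old_value if old_estimate >= new_estimate else new_value
    print(f"naive chooser:        {naive_total / trials:.3f}")
    print(f"tie-break toward new: {rule_total / trials:.3f}")

simulate()
```

Within the band of apparent ties, the new skill’s true value is roughly double its estimate, so the tie-break rule comes out ahead on average: the systematic bias in miniature.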
Well put! Your comment was valuable because I tend to think and read about problems in one field that were solved in a unique or unorthodox fashion thanks to skills acquired in another field (discovering new solutions), but your point about discovering new problems that can only be recognized by having expertise in multiple fields is something to think about as well.
Some of johnswentworth’s posts and sequences touch on something related, I think: Specializing in Problems We Don’t Understand, the Framing Practicum sequence, and the Gears Which Turn The World sequence.