I think the ability to put thoughts into words is not very domain-specific. Are you an above average creative writer? Or an above average teacher (to much less advanced pupils)? If neither, maybe you’re just having problems putting ideas into words in general.
Which would be good news, because that is a much more common problem than the specific instance you’re focused on, which means there’s probably a body of advice for fixing that.
That could be part of it. I’d also say that what is difficult is putting certain types of ideas into words. When people talk about scientific skepticism, for example, what exactly are they saying? Guys like Andrew Gelman or Scott Alexander (or plenty of other smart folks) are able to look through academic research across domains, and something sticks out to them as wrong. They can then go through and try to identify which claims, assumptions, or statistics are misguided. But prior to that there is this hunch or indicator that the author’s scientific claim is off. They seem to develop this hunch by having a well-developed heuristic for what is too complex to casually know, and what isn’t.
You see the same thing in economic forecasting, or medicine, where over a long time skilled people start to develop a hunch for when something is off. I think part of this hunch is knowing what is knowable, and what is beyond reach.
As a slightly contrived example: growing up, I was more of a naive rationalist. In my late teen years I learned that almost everything I was taught about drugs was a lie. Not just scheduled drugs, but nootropics as well. My dad is a physician, and without knowing any of the research (and having read less of it than me) he told me that it’s generally a bad idea to take drugs you don’t need. He had no real argument; it was something he’d picked up over years of practicing medicine: take as few drugs and as few treatments as possible, unless necessary.
Even though there is tons of research in medicine, it’s hard to codify and explain the way certain clever, established practitioners evaluate when we can rely on our inputs to lead to our desired outputs. These hunches are nonlinear and chaotic, which makes measuring them formally incredibly challenging. I’m probably bringing up medicine as an example due to the recent posts on depression networks from Slate Star Codex, where we also see Scott developing an intuition or understanding of how these complex systems interact, and of when and why the related research or classifications are misguided.