I think a core part of this is understanding that there is a trade-off between “sensitivity” and “specificity”, and that different search spaces vary greatly in which balance is appropriate for them.
I distinguish two different reading modes: sometimes I read to judge whether it’s safe to defer to the author about stuff I can’t verify, other times I’m just fishing for patterns that are useful to my work.
The former mode is necessary when I read about medicine. I can’t tell the difference between a brilliant insight and a lethal mistake, so it really matters to me to figure out whether the author is competent.
The latter mode is more appropriate when I’m trying to get a gears-level understanding of something, and the upside of novel ideas is much greater than the downside of bad ones. Even if a bad idea gets through my filter, it becomes very useful data when I later learn why it was wrong. The heuristic here should be “rule thinkers in, not out”, or “sensitivity over specificity”.
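To make the trade-off concrete, here’s a minimal sketch (mine, not from the original post, with purely illustrative numbers) that treats an idea filter as a binary classifier. Lowering the acceptance threshold raises sensitivity, catching more genuinely good ideas, at the cost of specificity, letting more bad ones through:

```python
# Illustrative sketch: an "idea filter" as a binary classifier.
# Each idea is (perceived_quality, is_actually_good); perceived quality
# is a noisy signal of actual quality. All numbers are made up.

def confusion(ideas, threshold):
    """Accept ideas whose perceived quality >= threshold; return
    (sensitivity, specificity) of the resulting filter."""
    tp = fn = fp = tn = 0
    for perceived, good in ideas:
        accepted = perceived >= threshold
        if good and accepted:
            tp += 1          # good idea kept
        elif good:
            fn += 1          # good idea discarded
        elif accepted:
            fp += 1          # bad idea let through
        else:
            tn += 1          # bad idea rejected
    sensitivity = tp / (tp + fn)  # fraction of good ideas kept
    specificity = tn / (tn + fp)  # fraction of bad ideas rejected
    return sensitivity, specificity

ideas = [(0.9, True), (0.7, True), (0.55, True), (0.4, True),
         (0.6, False), (0.45, False), (0.3, False), (0.1, False)]

for threshold in (0.65, 0.35):
    sens, spec = confusion(ideas, threshold)
    print(f"threshold={threshold}: sensitivity={sens:.2f}, specificity={spec:.2f}")

# threshold=0.65: sensitivity=0.50, specificity=1.00
# threshold=0.35: sensitivity=1.00, specificity=0.50
```

The strict threshold rejects every bad idea but discards half the good ones; the lenient threshold “rules thinkers in”, accepting some noise as the price of catching more useful patterns.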
Unfortunately, our research environment is set up in such a way that people are punished more for making mistakes than they are rewarded for novel contributions. Readers typically have the mindset of declaring an entire person useless based on the first mistake they find. This makes researchers risk-averse, and I end up seeing fewer useful patterns.
But consider: if you’re reading something purely to enhance your own repertoire of useful gears, you shouldn’t necessarily even be trying to find out what the author believes. If you notice yourself internally agreeing or disagreeing, you’re already missing the point. What they believe is tangential to how the patterns behave in your own models, and all that matters is finding patterns that work. Steelmanning should be the default, not because it helps you understand what others think, but because it’s obviously what you’d do to improve your models.