I really appreciate this essay. I also think that most of it consists of sazens. When I read it, I find concrete examples from my own experience bubbling up that confirm or contradict your claims. This is, of course, what I believe is expected of graduate students taking theoretical computer science or mathematics courses: they encounter an abstraction, and it is on them to construct concrete examples in their minds to get a sense of what the paper or textbook is talking about.
However, when it comes to more inchoate domains like research skill, such writing does very little to help the inexperienced researcher. They are more likely to simply miss the point you are trying to make, because they haven't yet failed in both directions: by, say, being too trusting (a common failure) and by being too wary of 'trusting' (a rarer failure for someone who makes it to the big leagues as a researcher). What would actually help is either concrete case studies, or a tight feedback loop in which a researcher tries to do something, perhaps fails, and gets specific feedback from an experienced researcher mentoring them. The latter has the advantage that nobody needs to explicitly elicit and draw clear distinctions between the skills involved; the mentee can learn them anyway. The former is useful because it is scalable (you write it once, and many people can read it), and because the concreteness lets people evaluate the abstract claims you make and pattern-match them to their own past, current, or potential future experiences.
For example, when reading the Inquiring and Trust section, I recalled an experience I had last year where I couldn't work with a team of researchers because I had basically zero ability to defer (and even now, as I write this, I find the notion of deferring somewhat distasteful). On the other hand, I don't think there's a real trade-off here. I don't expect that anyone needs to naively trust that the people they are coordinating with will have their back. I'd sooner accept the limits to coordination, recalibrate my expectations of the research project's usefulness, and continue if the expected value of working on the project until it ships is worth it (which in general it is).
When reading the Lightness and Diligence section, I was reminded of the Choudhuri 1985 paper, which describes the author's notion of 'partial science': an inability to push science forward due to certain systematic misconceptions about how basic science (theoretical physics, in that context) gets done. One such misconception involves a distaste for working on 'unimportant' problems, or problems that don't seem fundamental, while only caring about, or being willing to put effort into, 'fundamental' problems. The author doesn't make it explicit, but I believe his view was that the incremental work scientists do is nearly essential for building the knowledge and skill needed to eventually attack these supposedly fundamental problems, and that an aversion to supposedly incremental research problems leaves people stuck. This seems very similar to what you are pointing at when you say diligence and hard work are extremely important. Incremental research progress, to me, looks a lot like what you call 'cataloguing rocks'. You need data to see a pattern, after all.
This is the sort of realization and thinking I wouldn't have had if I did not have research experience or had not read relevant case studies. I expect the Mesa of early 2023 would have mostly skimmed and ignored your essay, simply because he'd have scoffed at the notion of 'Trust' and 'Lightness' being relevant in any way to research work.
However, when it comes to more inchoate domains like research skill, such writing does very little to help the inexperienced researcher. They are more likely to simply miss the point you are trying to make, because they haven't yet failed in both directions: by, say, being too trusting (a common failure) and by being too wary of 'trusting' (a rarer failure for someone who makes it to the big leagues as a researcher). What would actually help is either concrete case studies, or a tight feedback loop in which a researcher tries to do something, perhaps fails, and gets specific feedback from an experienced researcher mentoring them. The latter has the advantage that nobody needs to explicitly elicit and draw clear distinctions between the skills involved; the mentee can learn them anyway. The former is useful because it is scalable (you write it once, and many people can read it), and because the concreteness lets people evaluate the abstract claims you make and pattern-match them to their own past, current, or potential future experiences.
I wholeheartedly agree.
The reason I didn't go for this more grounded, practical, and teachable approach is that, at the moment, I'm optimizing for consistently writing and publishing posts.
Historically, the way I fail at that is by trying too hard to write really good posts and make all the arguments super clean, concrete, and detailed; this leads to me dropping the piece after about a week of attempts.
So instead, I'm going for "write what comes naturally, edit a bit to catch typos and check general coherence, and publish", which leads to much more abstract pieces (because that's how I naturally think).
But re-exploring this topic in an in-depth and detailed piece in the future, along the lines of what you describe, feels like an interesting challenge. I'll keep it in mind. Thanks for the thoughtful comment!
I agree with this strategy, and I plan to begin something similar soon. I forgot that Epistemological Fascinations is your less polished and more “optimized for fun and sustainability” substack. (I have both your substacks in my feed reader.)
No worries. ;)