I have seen the “Hamming question” concept applied to domains other than science (example 1, example 2, example 3, example 4, example 5, example 6, example 7).
I think that’s a mistake.
First, generally, it’s a mistake for terminology-dilution reasons: if you significantly broaden the scope of a term, you obscure differences between the concept or thing the term originally referred to, and other (variously similar) things; and you integrate assumptions about proper categories, and about similarities between their (alleged) members, into your thinking and writing, without justifying those assumptions or even making them explicit. This degrades the quality of your thought and your communication.
Second, specifically, it’s a mistake because science (i.e., academic or quasi-academic [e.g., corporate] research) differs from other domains (such as those discussed in my examples) in several important ways:
In science, if you’re trained in a field, then there’s no particular reason (other than—in principle, contingent and changeable—practical limitations such as funding) why you can’t work on just about any problem in that field. This is not the case in many other domains.
In science, there is generally no urgency to any particular problems or research areas; if everyone in the field works on one (ostensibly important) problem, but neglects another (ostensibly less important) problem, well, so what? It’ll keep. But in most other domains, if everyone works on one thing and neglects everything else, that’s bad, because all that “everything else” is often necessary; someone has got to keep doing it, even if one particular other thing is, in some sense, “more important”.
In science, you’re (generally) already doing inquiry; the fruit of your work is knowledge, understanding, etc. So it makes sense to ask what the “most important” problem is: presumably, it’s the problem (of those problems which we can currently define in a meaningful way) that, if solved, would yield the most knowledge, the greatest understanding, the most potential for further advancement, etc. But in other fields, where the goals of your efforts are not knowledge but something more concrete, it’s not clear that “most important” has a meaning, because for any goal we identify as “important”, there are always “convergent instrumental goals” as far as the eye can see, explore/exploit tradeoffs, incommensurable values, “goals” which are essentially homeostatic or otherwise have dynamics as their referents, etc., etc.
So while I can see the value of the Hamming question in science (modulo the response linked in my other comment), I should very much like to see an explicit defense and elaboration of applying the concept of the “Hamming question” to other domains.
I actually do basically agree with your first point. I made this stub because this is a concept frequently tossed around that I wanted people to be able to look up on LW easily… rather than because the jargon is optimal-according-to-me. In the most recent CFAR handbook the question is phrased:

“At any given time in our lives, it’s possible (though not always easy!) to answer the question, “What is the most important problem here, and what are the things that are keeping me from working on it?” We refer to this as “asking the Hamming question,” as a nod to mathematician Richard Hamming.”
And I think this is, importantly, a different question than the one Hamming was asking. Moreover, the rationality community will actually need the original Hamming Question from time to time, referring specifically to scientific fields in which you have extensive training. (Or, at least, if we didn’t need the Actual Science Hamming Question, that’d be quite a bad sign.) So yeah, I think the terminology dilution is pretty important.
This seems plausible. Has this happened so far?
It happens pretty frequently in the x-risk community, and I think in the non-x-risk EA community as well, although I don’t keep as close tabs on that.
(I think the question is asked both in terms of literal research to be done, and in terms of infrastructure that needs building. The infrastructure is a bit different from the research Hamming was pointing at, but I think it fits more closely within Hamming’s original paradigm than the personal-development CFAR framing does. I think it is fair to generalize the Hamming Question to “in my field of expertise, where I can reasonably expect myself to have a deep understanding of the situation, what are the most important things that need doing, and should I be working on them?”)
(My estimate is that there are something on the order of 10–50 people asking the question in the literal-research sense in the EA space. That estimate is based on a few people literally saying “I asked myself what the most important problems were and how I could work on them”, and some reading between the lines of how other people seem to be approaching problems and talking about them.)