The Hamming Question
This is a stub post, mostly existing so people can easily link to a post explaining what the Hamming question is. If you would like to write a real version of this post, ping me and I’ll arrange to give you edit rights to this stub.
For now, I am stealing the words from Jacobian’s event post:
Mathematician Richard Hamming used to ask scientists in other fields “What are the most important problems in your field?” partly so he could troll them by asking “Why aren’t you working on them?” and partly because getting asked this question is really useful for focusing people’s attention on what matters.
CFAR developed the technique of “Hamming Questions”: a set of prompts designed to get your brain to (actually) think about the biggest problems, bottlenecks, and unspoken desires in your life.
A transcript of Hamming’s extensive 1986 talk, “You and Your Research”, touches on several elements of Hamming’s philosophy, and includes this anecdote about the canonical “Hamming question”:
Over on the other side of the dining hall was a chemistry table. I had worked with one of the fellows, Dave McCall; furthermore he was courting our secretary at the time. I went over and said, “Do you mind if I join you?” They can’t say no, so I started eating with them for a while. And I started asking, “What are the important problems of your field?” And after a week or so, “What important problems are you working on?” And after some more time I came in one day and said, “If what you are doing is not important, and if you don’t think it is going to lead to something important, why are you at Bell Labs working on it?” I wasn’t welcomed after that; I had to find somebody else to eat with! That was in the spring.
In the fall, Dave McCall stopped me in the hall and said, “Hamming, that remark of yours got underneath my skin. I thought about it all summer, i.e. what were the important problems in my field. I haven’t changed my research,” he says, “but I think it was well worthwhile.” And I said, “Thank you Dave,” and went on. I noticed a couple of months later he was made the head of the department. I noticed the other day he was a Member of the National Academy of Engineering. I noticed he has succeeded. I have never heard the names of any of the other fellows at that table mentioned in science and scientific circles. They were unable to ask themselves, “What are the important problems in my field?”
If you do not work on an important problem, it’s unlikely you’ll do important work. It’s perfectly obvious. Great scientists have thought through, in a careful way, a number of important problems in their field, and they keep an eye on wondering how to attack them. Let me warn you, “important problem” must be phrased carefully. The three outstanding problems in physics, in a certain sense, were never worked on while I was at Bell Labs. By important I mean guaranteed a Nobel Prize and any sum of money you want to mention. We didn’t work on (1) time travel, (2) teleportation, and (3) antigravity. They are not important problems because we do not have an attack. It’s not the consequence that makes a problem important, it is that you have a reasonable attack. That is what makes a problem important. When I say that most scientists don’t work on important problems, I mean it in that sense. The average scientist, so far as I can make out, spends almost all his time working on problems which he believes will not be important and he also doesn’t believe that they will lead to important problems.
Vika Krakovna wrote up a report about how CFAR applies the technique in some of their workshops:
The CFAR alumni workshop on the first weekend of May was focused on the Hamming question. Mathematician Richard Hamming was known to approach experts from other fields and ask “what are the important problems in your field, and why aren’t you working on them?”. The same question can be applied to personal life: “what are the important problems in your life, and what is stopping you from working on them?”.
Over the course of the weekend, the twelve of us asked this question of ourselves and each other, in many forms and guises: “if Vika isn’t making a major impact on the world in 5 years, what would have stopped her?”, “what are your greatest bottlenecks?”, “how can we actually try?”, etc. The intense focus on mental pain points was interspersed with naps and silly games to let off steam. On the last day, we did a group brainstorm, where everyone who wanted to receive feedback took a turn in the center of the circle, and everyone else speculated on what they thought were the biggest bottlenecks of the person in the center. By this time, we had mostly gotten to know each other, and even the impressions from those who knew me less well were surprisingly accurate. I am very grateful to everyone at the workshop for being so insightful and supportive of each other (and actually caring).
By the way, I never understood why it’s supposed to be such a “trick” question. To “Why aren’t you working on them?” the obvious answer is diminishing returns: if a lot of people (or a lot of “total IQ”) already go into problem X, then adding more to problem X might be less useful than adding more to problem Y, which is less important but also more neglected.
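Here’s a toy sketch of that argument in Python (the square-root returns curve and all the numbers are made-up assumptions of mine, purely for illustration):

```python
import math

def marginal_value(importance, researchers):
    """Value added by one more researcher, assuming sqrt-shaped diminishing returns."""
    return importance * (math.sqrt(researchers + 1) - math.sqrt(researchers))

# Problem X: very important but crowded. Problem Y: less important but neglected.
print(marginal_value(importance=100, researchers=400))  # ~2.5
print(marginal_value(importance=10, researchers=1))     # ~4.1, so Y wins at the margin
```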
In the context of our community, people might interpret it as something like “why aren’t more people working on mitigating x-risk instead of studying academic questions with no known applications?”, which is a good question, but it’s not the same. The key here is the meaning of “important”. For most academics, “important” means “acknowledged as important in academia”, or at best “intrinsically interesting”. On the other hand, for EA-minded people “important” means “has actual positive influence on the world”. This difference in the meaning of “important” seems much more important than blaming people for not choosing the most important question on a scale they already accept.
In The Structure of Scientific Revolutions, Thomas Kuhn makes the point that fields like theoretical physics, where scientists pursue issues “acknowledged as important in academia”, often make much more progress than fields like economics, nutrition science, or social science, where the topics pursued have answers of high practical use.
In medicine, I think it’s great for EA reasons when researchers do basic work that moves our understanding of the human body forward, even when that work doesn’t have direct applications.
For one thing, this observation is strongly confounded by other characteristics that differ between those fields. For another, yes, I know that something studied just for the love of knowledge often has tremendous applications later. And yet, I feel that if your goal is improving the world, there is room for more analysis than “does it seem interesting to study that?”. Also, what I consider “practical” is not necessarily what is normally viewed as “practical”. For example, I consider it “practical” to research a question because it may have profound consequences many decades down the line, even if that’s only backed by broad considerations rather than some concrete application.
Relatedly, rereading this post was what prompted me to write this stub post:
I haven’t spoken about “love of knowledge”. Nutrition scientists who want to know what they should eat are also seeking knowledge that they might love. I spoke about research which advances the field.
As far as I understand the research, the ability of people who judge grants to predict how much influence a result will have decades later is very poor. Even estimating a paper’s effect at the time it is published is very hard.
Most published papers turn out to have no practical application. Most nutrition papers try to answer practical questions for which the field currently doesn’t have the ability to provide good answers.
In Feynman’s cargo cult speech, he talked about how Mr. Young did research on how to run psychology experiments on rats in a way that yields much better results. His research community unfortunately ignored him, to the point that it’s no longer possible to locate his paper, but had he been heard, his research would have had effects much larger than another random rat experiment with dubious methodology.
For the field of nutrition science to progress, we would likely need a lot of people like Mr. Young who think about how to actually make progress in learning about the field, even if that work is, at the beginning, far from practical application.
What is the difference between love of knowledge and “advancing the field”? Most researchers seem to focus on questions that are some combination of (i) personally interesting to them, (ii) likely to bring them fame, and (iii) likely to bring them grants. It would be awfully convenient for them if that were literally the best estimate you could make of what research will ultimately be useful, but I doubt that is the case. Some research that “advances the field” is actively harmful (e.g. advancing AI capabilities without advancing understanding, improving the ability to create synthetic pandemics, creating other technologies that are easy to weaponize by bad actors, creating technology that shifts economic incentives towards massive environmental damage...).
Love of knowledge can drive you to engage with questions that aren’t addressable with the current tools of a field in a way that brings the field forward.
Work that’s advancing the field is work on which other scientists can build.
In physics, scientists use significance thresholds that are much more stringent than 5%. If you told nutrition researchers that they could only publish findings that reach five sigma, they would be forced to run very differently structured studies. Those studies would provide answers that are a lot less interesting, but to the extent that researchers managed to make findings, those findings would be reliable and would allow other researchers to build on them.
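To make the gap concrete, a minimal sketch (assuming scipy is available; the two threshold conventions are standard, the comparison itself is just illustration):

```python
from scipy.stats import norm

p_conventional = 0.05      # common publication threshold in fields like nutrition science
p_five_sigma = norm.sf(5)  # one-sided tail probability beyond 5 sigma, ~2.87e-07

print(f"5% threshold:      {p_conventional:.0e}")
print(f"5-sigma threshold: {p_five_sigma:.2e}")
print(f"roughly {p_conventional / p_five_sigma:,.0f}x stricter")
```

A five-sigma cutoff is roughly 170,000 times stricter than p < 0.05, which is why the resulting studies would need to be structured so differently.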
I’m not saying that this is the only way forward for nutrition science, but I do think the field needs to think much harder about how progress can be made than simply running the kind of studies it currently runs.
Safety is a valid concern, and increases in capability in certain fields like AI might not be desirable for their own sake.
I think we probably use the phrase “love of knowledge” differently. The way I see it, if you love knowledge, then you must engage with questions addressable with the current tools of a field in a way that brings it forward; otherwise you are not gaining any knowledge, you are just wasting your time or fooling yourself and others. If certain scientists get spurious results because of poor methodology, there is no love of knowledge in it. I also don’t think they use poor methodology out of a desire for knowledge at all: rather, they probably do it because of the pressure to publish and the osmosis of some unhealthy culture in their field.
Or, “it’s too hard”. Or, “I don’t think I am good enough”. Or plenty of other excuses that are not necessarily a good reason for not doing the thing.
The point is not to have an answer, but to ask the question and to check.
You are not smarter for having the answer, you are smarter for asking the question.
I agree with the general principle; it’s just that my impression is that most scientists have asked themselves this question and made more or less reasonable decisions regarding it, with respect to the scale of importance prevalent in academia. From my (moderate amount of) experience, most scientists would love to crack the biggest problem in their field if they thought they had a good shot at it.
So, I’m not actually sure. I’m taking at face value that there *was* a guy who went around asking the question, and that it was fairly unusual and provoked weird enough reactions to become somewhat mythological. (Although I wouldn’t be that surprised if the mythology turned out to be false.)
But it’s not that surprising to me that many people would end up working on some random thing because it was expedient, or without having reflected much on what they should be working on at all. That seems to be the way people are by default.
The way I understand it, Hamming was a real guy asking really annoying questions at Bell Labs.
That’s my understanding too, I just wouldn’t be that surprised if that story went through a few games of telephone before it reached us.
I think the first version that reached me through the rationality sphere had Hamming asking all the questions on the same day.
A bit later, a local rationalist got a different version of the story through a family connection and a Bell Labs source. In that story, Hamming asked “What’s the most important question?” in week 1, “What are you working on?” in week 2, and “Why isn’t that the same?” in week 3.
(Pun intended? The former name of Bell Labs, and so on...)
Oh lol. No, unfortunately.
I have seen the “Hamming question” concept applied to domains other than science (example 1, example 2, example 3, example 4, example 5, example 6, example 7).
I think that’s a mistake.
First, generally, it’s a mistake for terminology-dilution reasons: if you significantly broaden the scope of a term, you obscure differences between the concept or thing the term originally referred to, and other (variously similar) things; and you integrate assumptions about proper categories, and about similarities between their (alleged) members, into your thinking and writing, without justifying those assumptions or even making them explicit. This degrades the quality of your thought and your communication.
Second, specifically, it’s a mistake because science (i.e., academic or quasi-academic [e.g., corporate] research) differs from other domains (such as those discussed in my examples) in several important ways:
In science, if you’re trained in a field, then there’s no particular reason (other than practical limitations such as funding, which are in principle contingent and changeable) why you can’t work on just about any problem in that field. This is not the case in many other domains.
In science, there is generally no urgency to any particular problems or research areas; if everyone in the field works on one (ostensibly important) problem, but neglects another (ostensibly less important) problem, well, so what? It’ll keep. But in most other domains, if everyone works on one thing and neglects everything else, that’s bad, because all that “everything else” is often necessary; someone has got to keep doing it, even if one particular other thing is, in some sense, “more important”.
In science, you’re (generally) already doing inquiry; the fruit of your work is knowledge, understanding, etc. So it makes sense to ask what the “most important” problem is: presumably, it’s the problem (of those problems which we can currently define in a meaningful way) that, if solved, would yield the most knowledge, the greatest understanding, the most potential for further advancement, etc. But in other fields, where the goals of your efforts are not knowledge but something more concrete, it’s not clear that “most important” has a meaning, because for any goal we identify as “important”, there are always “convergent instrumental goals” as far as the eye can see, explore/exploit tradeoffs, incommensurable values, “goals” which are essentially homeostatic or otherwise have dynamics as their referents, etc., etc.
So while I can see the value of the Hamming question in science (modulo the response linked in my other comment), I should very much like to see an explicit defense and elaboration of applying the concept of the “Hamming question” to other domains.
I actually do basically agree with your first point. I made this stub because this is a concept frequently tossed around that I wanted people to be able to look up on LW easily… rather than because the jargon is optimal-according-to-me. In the most recent CFAR handbook the question is phrased:
And I think this is an importantly different question from the one Hamming was asking. Moreover, the rationality community will actually need the original Hamming Question from time to time, referring specifically to scientific fields in which you have extensive training. (Or, at least, if we didn’t need the Actual Science Hamming Question, that’d be quite a bad sign.) So yeah, I think the terminology dilution is pretty important.
This seems plausible. Has this happened so far?
It happens pretty frequently in the x-risk community, and I think in the non-x-risk EA community as well, although I don’t keep as close tabs on that one.
(I think the question is asked both in terms of literal research to be done and infrastructure that needs building. The infrastructure is a bit different from the research Hamming was pointing at, but I think it fits more closely within Hamming’s original paradigm than the personal-development CFAR framing does. I think it is fair to generalize the Hamming Question to “in my field of expertise, where I can reasonably expect myself to have a deep understanding of the situation, what are the most important things that need doing, and should I be working on them?”)
(My estimate is that something on the order of 10-50 people are asking the question in the literal research sense in EA space. That estimate is based on a few people literally saying “I asked myself what the most important problems were and how I could work on them”, plus some reading between the lines of how other people seem to be approaching problems and talking about them.)
Based on the other comments, I feel like it is worthwhile to point out that Hamming is talking about how to be a successful scientist, as measured by things like promotions, publications, and reputation.
He is not talking about the impact of the problems themselves. From the quoted section, emphasis mine:
So it looks like we’re trying to apply the question one entire step before where Hamming did. For example, there weren’t—and if I read Hamming right, still aren’t—reasonable attacks on the alignment problem. The prospective consequences are just so great that we had to consider what is reasonable in a relative sense, and try anyway.
It feels like rationality largely boils down to the search for a generative rule for reasonable attacks.
This doesn’t quite feel right to me. From another section:
So this is clearly not about professional success, because he points to professional success as a thing that kills the kind of greatness he’s trying to cultivate in people.
My impression is that he was genuinely pointing at “important” meaning “things that will have an impact”; it’s just that tractability matters as much as importance-if-you-solve-the-problem, which is why “teleportation” isn’t a good project.
I read this section completely differently.
He points to thinking about the important problems as causing success. When people change what they are doing, they don’t continue to have it:
Carrying on from the end of your section:
The talk is about things that cause people to do great work. When those causal factors change, the work output also changes. He goes on to cover other things which are about professional success:
- Working with an open office door, to talk to your coworkers
- Changing routine work into more general and important work, which is more satisfying
- The importance of self-promotion
- Working on presentation skills
- How to recruit your boss to fight with outside agencies
- How to get your boss to give you more resources
- Dressing for success, and getting punished for non-conformity
Lastly, he is pretty specific about his motivations (emphasis mine):
So he is specifically talking about professional success in science. But—things like the rationality project and EA are good candidates for other fields to which the advice could be applied, especially in light of how important science is to them.
I agree that Hamming is talking about how to be a successful scientist, but I think “as measured by things like promotions, publications, and reputation” gives the wrong impression: that Hamming’s talking about how to optimize for personal success as opposed to overall impact. But the “have a reasonable attack” criterion is necessary for optimizing impact on the world, too, and I don’t think Hamming would have changed his advice if he’d been convinced that (e.g.) the way to maximize promotions, publications, and reputation is to get better at self-promotion or to falsify your results or something.
I think that personal success is the correct impression:
Notice he doesn’t talk about all the amazing things that were solved; he talks about lab positions and Nobel Prizes and getting equations named after himself.
I expect that Hamming would view having an impact on the world as being a good reason to choose going into science instead of law or finance, but once that choice is made being great at science is the reasonable thing to do.
To be clear, I don’t think he viewed reputations and promotions as the goal, I believe he viewed them as reasonable metrics that he was on the right track for doing great science.
Rereading the original text, I think he is talking about all three of (1) doing something that has a substantial impact on the world, (2) doing something that brings you major career success, and (3) doing something that turns you into a better scientist and a better person. (The last of those is mostly not very apparent in what he says, but there’s this: “I think it is very definitely worth the struggle to try and do first-class work because the truth is, the value is in the struggle more than it is in the result. The struggle to make something of yourself seems to be worthwhile in itself. The success and fame are sort of dividends, in my opinion.”)
And here’s the best response (that I’ve seen) to a “Hamming question”.
Yes, the comparative advantage answer is a compelling one, when it’s not an excuse based on motivated cognition.
Quoted from the 2016 CFAR handbook:
Richard Hamming was a mathematician at Bell Labs from the 1940’s through the 1970’s who liked to sit down with strangers in the company cafeteria and ask them about their fields of expertise. At first, he would ask mainly about their day-to-day work, but eventually, he would turn the conversation toward the big, open questions—what were the most important unsolved problems in their profession? Why did those problems matter?
What kinds of things would change when someone in the field finally broke through? What new potential would that unlock? After he’d gotten them excited and talking passionately, he would ask one final question: “So, why aren’t you working on that?”
Hamming didn’t make very many friends with this strategy, but he did inspire some of his colleagues to make major shifts in focus, rededicating their careers to the problems they felt actually mattered.
Do you have more info on this? I’d be very curious to hear about some specific examples!
Yesterday I read through Hamming’s talk, “You and Your Research”, which explores his overall philosophy. I think this anecdote is most relevant (I’m probably going to edit it into the main post):
That seems like a startlingly weak anecdote (especially so given that it’s the only one we’ve seen). From this quote, it seems like Hamming—contrary to the claim Elo quoted—in fact inspired none of his colleagues to “make major shifts in focus” or to “rededicat[e] their careers to the problems they felt actually mattered”.
The one colleague who was, allegedly, inspired by Hamming’s questions in some way, explicitly said (we are told) that he did not shift his research focus! He ended up being successful… which Hamming attributes to his own influence, for… some reason. (The anecdotal evidence provided for this causal sort-of-claim is almost textbook poor; it’s literally nothing more than post hoc, ergo propter hoc…)
Do we have any solid evidence, at all, that there is any concrete, demonstrable benefit, or even consequence, to asking the “Hamming question”? Any case studies (with much more detail, and more evidential support, than the anecdote quoted above)? So far, it seems to me that the significance attached to this “Hamming question” concept has been far, far out of proportion to its verified usefulness…
Edit: Corrected wording to make it clear Elo was quoting a source.
[for clarity, we were both quoting other sources]
My opinion is that from trying the exercises several times over the course of the last few years, it’s a valuable tool to help me see what I’m ignoring or what I need to deal with.
Indeed, my apologies—I read hastily, and didn’t spot the quoting without the quotation styling. I’ve corrected the wording in the grandparent.
We encourage participants to occasionally ask “the Hamming question.”
Checking in on the match between your beliefs and your actions is a reasonable thing to do a few times a year. It can lead to increased motivation, positive shifts to better strategies, and a clearer sense of where your deepest priorities lie.
Sometimes the most important problem carries less importance (say, 20 percent of the total) than the sum of less important problems (say, 8 × 10 = 80 for eight smaller problems). For example, if everybody works on AI safety, some smaller x-risks could be completely neglected.
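A trivial check of those toy numbers (entirely hypothetical, just restating the arithmetic):

```python
big_problem = 20           # the single most important problem: 20% of total importance
small_problems = [10] * 8  # eight neglected problems worth 10 each
print(sum(small_problems) > big_problem)  # True: 80 > 20, the small problems dominate in sum
```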
The commons effect of existential risks may complicate that example. (Shorter-term existential risks make longer-term existential risks less impactful until the shorter-term ones are solved.)