By the way, I never understood why it’s supposed to be such a “trick” question. “Why aren’t you working on them?” The obvious answer is diminishing returns. If a lot of people (or a lot of “total IQ”) already go into problem X, then adding more to problem X might be less useful than adding more to problem Y, which is less important but also more neglected.
In the context of our community, people might interpret it as something like “why aren’t more people working on mitigating X-risk instead of studying academic questions with no known applications”, which is a good question, but it’s not the same one. The key here is the meaning of “important”. For most academics, “important” means “acknowledged as important in academia”, or at best “intrinsically interesting”. For EA-minded people, on the other hand, “important” means “has an actual positive influence on the world”. This difference in the meaning of “important” seems far more significant than blaming people for not choosing the most important question on a scale they already accept.
In The Structure of Scientific Revolutions, Thomas Kuhn makes the point that fields like theoretical physics, where scientists pursue issues “acknowledged as important in the academia”, often make much more progress than fields like economics, nutrition science, or social science, where researchers pursue topics whose answers are of high practical use.
In medicine, I think it’s great for EA reasons when researchers do basic work that moves our understanding of the human body forward, even when that doesn’t have direct applications.
For one thing, this observation is strongly confounded by other characteristics that differ between those fields. For another, yes, I know that something studied just for the love of knowledge often has tremendous applications later. And yet, I feel that if your goal is improving the world, then there is room for more analysis than “does it seem interesting to study that”. Also, what I consider “practical” is not necessarily what is normally viewed as “practical”. For example, I consider it “practical” to research a question because it may have profound consequences many decades down the line, even if that’s backed only by broad considerations rather than some concrete application.
Relatedly, rereading this post was what prompted me to write this stub post:
I’m fairly concerned with the practice of telling people who “really care about AI safety” to go into AI capabilities research, unless they are very junior researchers who are using general AI research as a place to improve their skills until they’re able to contribute to AI safety later. (See Leveraging Academia).
The reason is not a fear that they will contribute to AI capabilities advancement in some manner that will be marginally detrimental to the future. It’s also not a fear that they’ll fail to change the company’s culture in the ways they’d hope, and end up feeling discouraged. What I’m afraid of is that they’ll feel pressure to start pretending to themselves, or to others, that their work is “relevant to safety”. Then what we end up with are companies and departments filled with people who are “concerned about safety”, creating a false sense of security that something relevant is being done, when all we have are a bunch of simmering concerns and concomitant rationalizations.
This fear of mine requires some context from my background as a researcher. I see this problem with environmentalists who “really care about climate change”, who tell themselves they’re “working on it” by studying the roots of a fairly arbitrary species of tree in a fairly arbitrary ecosystem that won’t generalize to anything likely to help with climate change.
My assessment that their work won’t generalize is mostly not from my own outside view; it comes from asking the researcher about how their work is likely to have an impact, and getting a response that either says nothing more than “I’m not sure, but it seems relevant somehow”, or an argument with a lot of caveats like “X might help with Y, which might help with Z, which might help with climate change, but we really can’t be sure, and it’s not my job to defend the relevance of my work. It’s intrinsically interesting to me, and you never know if something could turn out to be useful that seemed useless at first.”
At the same time, I know other climate scientists who seem to have actually done an explicit or implicit Fermi estimate for the probability that they will personally soon discover a species of bacteria that could safely scrub the Earth’s atmosphere of excess carbon. That’s much better.
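A Fermi estimate of that kind can be as simple as multiplying a handful of rough, explicitly stated factors. The sketch below is purely illustrative; every number in it is invented, and the point is only that each assumption is made explicit and can be challenged:

```python
# Toy Fermi estimate: P(this research program leads to a deployable
# carbon-scrubbing organism within 20 years). All factors are made up
# for illustration; the value is in stating each assumption explicitly.
factors = {
    "such an organism is biologically possible": 0.3,
    "this line of research would find it": 0.05,
    "it could be deployed safely at scale": 0.1,
    "deployment happens within 20 years": 0.2,
}

p = 1.0
for assumption, prob in factors.items():
    p *= prob

print(f"rough probability: {p:.1e}")  # ~3e-4
```

Even with generous made-up numbers, the product often lands orders of magnitude below what a researcher’s informal optimism would suggest, which is exactly why running the estimate at all is valuable.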
I haven’t spoken about “love of knowledge”. Nutrition scientists who want to know what they should eat are also seeking knowledge that they might love. I spoke about research which advances the field.
As far as I understand the research, the ability of people who judge grants to predict how much influence a result will have decades later is very poor. Even estimating a result’s effect at the time a paper is published is very hard.
Most published papers turn out to have no practical application. Most nutrition papers try to answer practical questions for which the field currently doesn’t have the ability to provide good answers.
In Feynman’s cargo cult speech, he talked about how Mr. Young did research on how to run psychology experiments on rats in a way that yields much better results. Unfortunately, his research community ignored him, to the point that it’s no longer possible to locate his paper; but if he had been heard, his research would have had much larger effects than yet another random rat experiment with dubious methodology.
For the field of nutrition science to progress, we would likely need a lot of people like Mr. Young who think about how to actually make progress in learning about the field, even if that is, in the beginning, far from practical application.
What is the difference between love of knowledge and “advancing the field”? Most researchers seem to focus on questions that are some combination of (i) personally interesting to them, (ii) likely to bring them fame, and (iii) likely to bring them grants. It would be awfully convenient for them if that were literally the best estimate you could make of what research will ultimately be useful, but I doubt it is. Some research that “advances the field” is actively harmful (e.g. advancing AI capabilities without advancing understanding, improving the ability to create synthetic pandemics, creating other technologies that are easy for bad actors to weaponize, creating technology that shifts economic incentives towards massive environmental damage...).
Love of knowledge can drive you to engage with questions that aren’t addressable with the current tools of a field in a way that brings the field forward.
Work that’s advancing the field is work on which other scientists can build.
In physics, scientists use significance thresholds that are much more stringent than 5%. If you told nutrition researchers that they could only publish findings at 5 sigma, they would be forced to run very differently structured studies. Those studies would provide answers that are a lot less interesting, but to the extent that researchers managed to produce findings, those findings would be reliable and would allow other researchers to build on them.
I’m not saying that this is the only way forward for nutrition science, but I do think the field needs to think much harder about how progress can be made than it does by running the kind of studies it runs currently.
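For concreteness, the gap between the two conventions can be computed from the normal tail. The sketch below uses a two-sided p-value (particle physics actually quotes one-sided 5 sigma, but the order of magnitude is similar):

```python
import math

def two_sided_p(n_sigma: float) -> float:
    """Two-sided p-value of an n-sigma deviation under a normal null."""
    return math.erfc(n_sigma / math.sqrt(2))

print(two_sided_p(1.96))  # ~0.05: the usual 5% bar in nutrition/social science
print(two_sided_p(5.0))   # ~5.7e-7: the particle-physics convention
```

So moving from a 5% threshold to 5 sigma tightens the false-positive rate by roughly five orders of magnitude, which is why it would force such differently structured studies.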
Safety concerns are valid, and an increase in capability in certain fields like AI might not be desirable for its own sake.
I think we probably use the phrase “love of knowledge” differently. The way I see it, if you love knowledge then you must engage with questions addressable with the current tools in a way that brings the field forward; otherwise you are not gaining any knowledge, you are just wasting your time or fooling yourself and others. If certain scientists get spurious results because of poor methodology, there is no love of knowledge in it. I also don’t think they use poor methodology out of a desire for knowledge at all: rather, they probably do it because of the pressure to publish and through osmosis of an unhealthy culture in their field.
I agree with the general principle; it’s just that my impression is that most scientists have asked themselves this question and made more or less reasonable decisions about it, with respect to the scale of importance prevalent in academia. From my (moderate amount of) experience, most scientists would love to crack the biggest problem in their field if they thought they had a good shot at it.
So, I’m not actually sure. I’m taking at face value that there *was* a guy who went around asking the question, and that it was fairly unusual and provoked weird enough reactions to become somewhat mythological. (Although I wouldn’t be that surprised if the mythology turned out to be false.)
But it’s not that surprising to me that many people would end up working on some random thing because it was expedient, or without having reflected much on what they should be working on at all. This seems to be the way people are by default.
I think the first version that reached me through the rationality sphere had Hamming asking all the questions on the same day.
A bit later, a local rationalist got a different version of the story through a family connection and a Bell Labs source. In that story, Hamming asked “What’s the most important question?” in week 1, “What are you working on?” in week 2, and “Why isn’t that the same?” in week 3.
Or, “it’s too hard”. Or, “I don’t think I am good enough”. Or plenty of other excuses that are not necessarily good reasons for not doing the thing.
The point is not to have an answer, but to ask the question and to check.
You are not smarter for having the answer, you are smarter for asking the question.
The way I understand it, Hamming was a real guy asking genuinely annoying questions at Bell Labs.
That’s my understanding too, I just wouldn’t be that surprised if that story went through a few games of telephone before it reached us.
(Pun intended? The former name of Bell Labs, and so on...)
Oh lol. No, unfortunately.