There are lots of ways a researcher can choose to adopt new productivity habits. They include:
Inside view: reasoning from first principles
Outside view: copying what successful researchers do
The purpose of this post is to take the outside view and list what a particular class of researchers (professors) does; this class happens to operate very differently from the AI safety community.
Once again, I am not claiming to have an inside-view argument in favor of adopting each of these attributes. I do not have empirics. I am not claiming to have an airtight causal model. If you refer back to the original post, you will notice that I was careful to call this a list of attributes drawn from anecdotal evidence, and if you refer back to the AI safety section, you will notice that I was careful to call my points considerations, not conclusions.
You keep arguing against a claim I’ve never put forward, something like: “The bullshit in academia (publish or perish, positive results make for better papers) causes better research to happen.” Of course I disagree with this claim. There is no need to waste ink arguing against it.
It seems like the actual crux we disagree on is: “How similar are the goals of succeeding in academia and of doing good (AI safety) research?” If I had to guess the source of our disagreement, I might speculate that we’ve both heard the same stories about the replication crisis, the inefficiencies of grant proposals and peer review, and other bullshit in academia. But I’ve additionally encountered a great deal of anecdotal evidence indicating that, in spite of all this bullshit, the people at the top overwhelmingly do not seem bogged down by it, and the first-order factor in their getting where they are was in fact research quality. The way to convince you of this might be to repeat the methodology used in Childhoods of exceptional people, but this would be incredibly time consuming. (I’ll give you 1/20th of such a blog post for free: here’s Terry Tao on time management.)
This crux clears up our correlation vs. causation disagreement: since I think the goals are very similar, I take correlation as evidence for causation; since you think the goals are very different, you seem to think many of the attributes I’ve listed are primarily relevant to the ‘navigating academic bullshit’ part of academia.
I’ve addressed your comment in broad terms, but just to conclude I wanted to respond to one point you made which seems especially wrong.
how did e.g. networking [...] enable them to get to these [impressive research] findings?
In the networking section, you will find that I defined “networking” as “knowing many people doing research in and outside your field, so that you can easily reach out to them to request a collaboration”. People are more likely to respond to collaboration requests from acquaintances than from strangers. Thus for this particular attribute you actually do get a causal model: networking causes collaborations, which cause better research results. I guess you can dispute the claim “collaborations cause better research results”, but I think this would be an odd hill to die on, considering most interdisciplinary work relies on collaborations.
What I’m questioning is the implicit assumption in your post that AI safety research will inevitably take place in an academic environment, and therefore productivity practices derived from other academic settings will be helpful. Why should this be the case when, over the past few years, most of the AI capabilities research has occurred in corporate research labs?
Some of your suggestions, of course, work equally well in either environment. But not all, and even the ones which do work would require a shift in emphasis. For example, when you say professors should be acquainted with other professors, that’s valid in academia, where roughly everyone who matters either has tenure or is on a tenure track. However, that is not true in a corporate environment, where many people may not even have PhDs. Furthermore, in a corporate environment, limiting one’s networking to just researchers is probably ill advised, given that there are many other people who would have influence upon the research. Knowing a senior executive with influence over product roadmaps could be just as valuable, even if that executive has no academic pedigree at all.
Prioritizing high-value research and ignoring everything else is a skill that works in both corporate and academic environments. But 80/20-ing teaching? In a corporate research lab, one has no teaching responsibilities. One would be far better served learning some basic software engineering practices, in order to better interface with product engineers. Similarly, with regards to publishing, for a corporate research lab, having a working product is worth dozens of research papers. Research papers bring prestige, but they don’t pay the bills. Therefore, I would argue that AI safety researchers should be keeping an eye on how their findings can be applied to existing AI systems. This kind of product-focused development is something that academia is notoriously bad at.
I also question your claim that academic bureaucracy doesn’t slow good researchers down very much. That’s very much not in line with the anecdotes I’ve heard. From what I’ve seen, writing grant proposals, dealing with university bureaucracy, and teaching responsibilities are a significant time suck. Maybe with practice and experience, it’s possible for a good researcher to complete these tasks on “autopilot”, and therefore not notice the time being spent. But the tasks still cost time and mental energy that, ideally, would be devoted to research or writing.
I don’t think it’s inevitable that academia will take over AI safety research, given the trend in AI capabilities research, and I certainly don’t think that academia taking over AI safety research would be a good thing. For this reason I question whether it’s valuable for AI safety researchers to develop skills valuable for academic research, specifically, as opposed to general time management, software engineering and product development skills.
I have 2 separate claims:
1. Any researcher, inside or outside of academia, might consider emulating attributes successful professors have in order to boost personal research productivity.
2. AI safety researchers outside of academia should try harder to make their research legible to academics, as a cheap way to get more good researchers thinking about AI safety.
What I’m questioning is the implicit assumption in your post that AI safety research will inevitably take place in an academic environment [...]
I do not make this assumption: you are putting together (1) and (2) in a way which I did not intend.
Furthermore, in a corporate environment, limiting one’s networking to just researchers is probably ill advised, given that there are many other people who would have influence upon the research. Knowing a senior executive with influence over product roadmaps could be just as valuable, even if that executive has no academic pedigree at all.
I agree, but this is not a counterargument against my post. This is just an incredibly reasonable interpretation of what it means to be “good at networking” for an industry researcher.
But 80/20-ing teaching? In a corporate research lab, one has no teaching responsibilities. One would be far better served learning some basic software engineering practices, in order to better interface with product engineers.
My post is not literally recommending that non-academics 80/20 their teaching. I am confused why you think that I would think this. 80/20-ing teaching is an example of how professors allocate their time to what’s important. Professors are being used as a case study in the post. When applied to an AI safety researcher who works independently or as part of an industry lab, perhaps “teaching” might be replaced with “responding to cold emails” or “supervising an intern”. I acknowledge that professors spend more time teaching than non-academic researchers spend doing these tasks. But once again, the point of this post is just to list a bunch of things successful professors do, and then non-professors are meant to consider these points and adapt the advice to their own environment.
Similarly, with regards to publishing, for a corporate research lab, having a working product is worth dozens of research papers. Research papers bring prestige, but they don’t pay the bills. Therefore, I would argue that AI safety researchers should be keeping an eye on how their findings can be applied to existing AI systems. This kind of product-focused development is something that academia is notoriously bad at.
This seems like a crux: I am more optimistic about leveraging academic labor and expertise, and you are more optimistic about deploying AI safety solutions to existing systems.
I also question your claim that academic bureaucracy doesn’t slow good researchers down very much. That’s very much not in line with the anecdotes I’ve heard. [...]
This is another crux. We have each heard different anecdotal evidence and are weighing it differently.
I don’t think it’s inevitable that academia will take over AI safety research, given the trend in AI capabilities research, and I certainly don’t think that academia taking over AI safety research would be a good thing.
I never said that academia would take over AI safety research, and I also never said this would be a good thing. I believe that there is a lot of untapped free skilled labor in academia, and AI safety researchers should put in more of an effort (e.g. by writing papers) to put that labor to use.
For this reason I question whether it’s valuable for AI safety researchers to develop skills valuable for academic research, specifically, as opposed to general time management, software engineering and product development skills.
One of the attributes I list is literally time management. As for the other two, I think it depends on the kind of AI safety researcher we are talking about, which goes directly back to our “leveraging academia” versus “product development” crux. I agree that if what you’re trying to do is product development, the skills you list are critical. But I think product development is not at all the only way to do AI safety, and other ways to do AI safety plug into academia more easily.