I have 2 separate claims:

1. Any researcher, inside or outside of academia, might consider emulating attributes successful professors have in order to boost personal research productivity.
2. AI safety researchers outside of academia should try harder to make their work legible to academics, as a cheap way to get more good researchers thinking about AI safety.
What I’m questioning is the implicit assumption in your post that AI safety research will inevitably take place in an academic environment [...]
This assumption is not implicit; you're putting together (1) and (2) in a way I did not intend.
Furthermore, in a corporate environment, limiting one’s networking to just researchers is probably ill advised, given that there are many other people who would have influence upon the research. Knowing a senior executive with influence over product roadmaps could be just as valuable, even if that executive has no academic pedigree at all.
I agree, but this is not a counterargument against my post. This is just an incredibly reasonable interpretation of what it means to be “good at networking” for an industry researcher.
But 80/20-ing teaching? In a corporate research lab, one has no teaching responsibilities. One would be far better served learning some basic software engineering practices, in order to better interface with product engineers.
My post is not literally recommending that non-academics 80/20 their teaching, and I am confused why you think that I would think this. 80/20-ing teaching is an example of how professors allocate their time to what’s important; professors are being used as a case study in the post. When applied to an AI safety researcher who works independently or as part of an industry lab, perhaps “teaching” might be replaced with “responding to cold emails” or “supervising an intern”. I acknowledge that professors spend more time teaching than non-academic researchers spend on these tasks. But once again, the point of this post is just to list a bunch of things successful professors do, so that non-professors can consider these points and adapt the advice to their own environment.
Similarly, with regards to publishing, for a corporate research lab, having a working product is worth dozens of research papers. Research papers bring prestige, but they don’t pay the bills. Therefore, I would argue that AI safety researchers should be keeping an eye on how their findings can be applied to existing AI systems. This kind of product-focused development is something that academia is notoriously bad at.
This seems like a crux: I am more optimistic about leveraging academic labor and expertise, while you are more optimistic about deploying AI safety solutions to existing systems.
I also question your claim that academic bureaucracy doesn’t slow good researchers down very much. That’s very much not in line with what anecdotes I’ve heard. [...]
This is another crux. We have each heard different anecdotal evidence and are weighing it differently.
I don’t think it’s inevitable that academia will take over AI safety research, given the trend in AI capabilities research, and I certainly don’t think that academia taking over AI safety research would be a good thing.
I never said that academia would take over AI safety research, and I also never said this would be a good thing. I believe that there is a lot of untapped free skilled labor in academia, and AI safety researchers should make more of an effort (e.g., by writing papers) to put that labor to use.
For this reason I question whether it’s valuable for AI safety researchers to develop skills valuable for academic research, specifically, as opposed to general time management, software engineering and product development skills.
One of the attributes I list is literally time management. As for the other two, I think it depends on the kind of AI safety researcher we are talking about, which goes directly back to our “leveraging academia” versus “product development” crux. I agree that if what you’re trying to do is product development, the skills you list are critical. But product development is not at all the only way to do AI safety, and other ways to do AI safety more easily plug into academia.