80,000 Hours’ data suggests that people are the bottleneck, not funding. Could you tell me why you think otherwise? It’s possible that there’s even more funding available in AI research and similar fields, which are likely sources of FAI researchers.
I looked at the 80k page again, and I still don’t get their model. They say the bottleneck is people who have PhDs from top schools (an essentially supply-gated resource) and who can geographically work in the FAI labs (a roughly constant fraction of those PhD holders). It seems to me that the main lever for increasing the number of top-school PhD graduates is to increase funding, and thus positions, in AI-related fields. (Of course, this lever might still take years to show its effects, but I do not see how individual decisions can be the bottleneck here.)

As I said, I am probably wrong, but I would like to understand this.
> They say the bottleneck is people who have PhDs from top schools (an essentially supply-gated resource)
They write no such thing. They do say:
> Might you have a shot of getting into a top 5 graduate school in machine learning? This is a reasonable proxy for whether you can get a job at a top AI research centre, though it’s not a requirement.
They use it as a proxy for cognitive ability. It’s possible for a person who writes insightful AI alignment forum posts to be hired into an AI research role. It’s just very hard to develop the ability to write insightful things about AI alignment, and the kind of person who can is also the kind of person who can get into a top 5 graduate school in machine learning.
As for increasing the number of AI PhDs: that can accelerate AI development in general, so it’s problematic from the perspective of AI risk.
They don’t speak about having a PhD but about the ability to get into a top 5 graduate program. Many people who have the ability to get into a top 5 program never do, and instead pursue other directions.

The number of people with that ability level does not directly depend on the number of PhDs that are awarded.