I think FAI research as a whole is mostly bottlenecked by funding; there are many smart people who will work in any field that has funding available (in my model of the world). So unless you’re someone who does not need funding or can fund others, you might not be part of the bottleneck.
I am really quite confident that the space is not bottlenecked by funding. Maybe we have different conceptions of what we mean by funding, but there really is a lot of money (~$5-10 billion USD) that is ready to be deployed towards promising AI Alignment opportunities; there just aren’t any that seem very promising and aren’t already funded. It really seems to me that funding is very unlikely to be the bottleneck for the space.
I am just speaking from general models and I have no specific model for FAI, so I was/am probably wrong.
I still don’t understand the bottleneck. You say there aren’t promising projects to fund. Isn’t this just another way of saying that the problem is hard, that most research attempts will be futile, and thus that to accelerate progress, unpromising projects need to be funded? I.e., what is the bottleneck if it’s not funding? “Brilliant ideas” are not under our direct control, so they cannot be part of our operating bottleneck.
The solution space is really high-dimensional, so just funding random points in it has basically no chance of getting you much closer to a functioning solution. There aren’t even enough people who understand what the AI Alignment problem is to fund all of them, and funding people can frequently have downsides. Two common downsides of funding people:
They have an effect on the social context in which work happens, and if they don’t do good work, they scare away other contributors, or worsen the methodology of your field
If you give away money like candy, you attract lots of people who will pretend to do the work you want done and just take your money. There are definitely enough people who just want to take your money to exhaust $10B in financial resources (or really any reasonable amount of resources). In a lemons market, you need to maintain some level of vigilance; otherwise you can easily lose all of your resources at almost any level of wealth.
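The high-dimensionality point above can be made concrete with a toy simulation (illustrative only; the target point, tolerance, and dimensions are arbitrary assumptions, not anything from the original discussion): the chance that a uniformly random guess lands near a fixed target falls off exponentially with dimension.

```python
import random

def hit_rate(dim, trials=100_000, tol=0.1):
    """Estimate the fraction of uniform random points in [0, 1]^dim
    that land within `tol` of a fixed target in every coordinate."""
    target = [0.5] * dim
    hits = 0
    for _ in range(trials):
        point = [random.random() for _ in range(dim)]
        if all(abs(p - t) <= tol for p, t in zip(point, target)):
            hits += 1
    return hits / trials

# The exact probability is (2 * tol)^dim, so random search collapses
# fast as the dimension grows: 0.2 in 1-D, 0.04 in 2-D, ~1e-7 in 10-D.
for dim in (1, 2, 5, 10):
    print(dim, hit_rate(dim))
```

This is just a sketch of why “funding random points” scales so badly: even generous coverage of each individual dimension gives essentially zero coverage of the joint space.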
80,000 Hours’ data suggests that people are the bottleneck, not funding. Could you tell me why you think otherwise? It’s possible that there’s even more available funding in AI research and similar fields that are likely sources for FAI researchers.
I looked at the 80k page again, and I still don’t get their model. They say the bottleneck is people who have PhDs from top schools (an essentially supply-gated resource) and can geographically work in the FAI labs (a roughly constant fraction of those PhD holders). It seems to me that the main lever for increasing the number of top-school PhD graduates is to increase funding, and thus positions, in AI-related fields. (Of course, this lever might still take years to show its effects, but I do not see how individual decisions can be the bottleneck here.)
As said, I am probably wrong, but I like to understand this.
They say the bottleneck is people who have PhDs from top schools (an essentially supply-gated resource),
They write no such thing. They do say:
Might you have a shot of getting into a top 5 graduate school in machine learning? This is a reasonable proxy for whether you can get a job at a top AI research centre, though it’s not a requirement.
They use it as a proxy for cognitive ability. It’s possible for a person who writes insightful AI alignment forum posts to be hired into an AI research role. It’s just very hard to develop the ability to write insightful things about AI alignment, and the kind of person who can is also the kind of person who can get into a top 5 graduate school in machine learning.
As for increasing the number of AI PhDs: that can accelerate AI development in general, so it’s problematic from the perspective of AI risk.
They don’t speak about having a PhD but about the ability to get into a top 5 graduate program. Many people who have the ability to get into a top 5 program don’t do so, but pursue other directions.
The number of people with that ability level is not directly dependent on the number of PhDs that are given out.
One good example of what funding can do is nanotech. https://www.lesswrong.com/posts/Ck5cgNS2Eozc8mBeJ/a-review-of-where-is-my-flying-car-by-j-storrs-hall describes how strong funding killed off the nanotech field by getting people to compete for that funding.