Hiring being highly selective does not imply things aren’t constrained on people.
Getting 10x as many people as good as the top 20 best safety researchers would make a huge difference.
This argument has been had before on LessWrong. The usual counter is that we don’t actually know ahead of time who the top 20 people are, and so need to experiment & hedge our bets, and that experimentation is the main constraint on getting a top 20. Currently we do this, but only really fund people for 1-2 years, whereas historically it takes more like 5 years to reveal yourself as a top 20, and I’d guess it can actually take more like 10 years.
So why not that funding model? Mostly a money thing.
I expect you will argue that in fact revealing yourself as a top 20 happens in fewer than 5 years, if you do argue.
Hmm, I really just mean that “labor” is probably the most important input to the current production function. I don’t want to make a claim that there aren’t better ways of doing things.
Ok, but when we ask why this constraint is tight, the answer is because there’s not enough funding. We can’t just increase the size of the field 10x in order to get 10x more top-20 researchers, because we don’t have the money for that.
For example, suppose MATS suddenly & magically scaled up 10x, and their next cohort was 1,000 people. Would this dramatically change the state of the field? I don’t think so.
Now suppose SFF & LTFF’s budget suddenly & magically scaled up 10x. Would this dramatically change the state of the field? I think so!
I do think so, especially if they also increased and decentralized their grantmaking capacity, and perhaps increased field-building capacity earlier in the pipeline (e.g. AGISF, ML4G, etc., though I expect those programs are mostly doing comparatively well and aren’t the main bottlenecks).
*seems like mostly a funding deployment issue, probably due to some structural problems, AFAICT, without having any great inside info (within the traditional AI safety funding space; the rest of the world seems much less on the ball than the traditional AI safety funding space).
I don’t understand what you mean. Do you mean there is lots of potential funding for AI alignment in e.g. governments, but that funding is only going to university researchers?
No, I mean that EA + AI safety funders probably do have a lot of money earmarked for AI risk mitigation, but they don’t seem able/willing to deploy it fast enough (according to my timelines, at least, but probably also according to many of theirs).
Governments mostly just don’t seem on the ball at all w.r.t. AI, despite the recent progress (e.g. the AI safety summits, the establishment of AISIs, etc.).
But legibility is a separate issue. If there are people who would potentially be good safety researchers, but they get turned away by recruiters because they don’t have a legibly impressive resume, then companies end up lacking employees they would have done well to hire.
So, companies could be less constrained on people if they were more thorough about evaluating candidates on more than shallow, easily legible qualities.
Spending more money on this kind of recruitment evaluation would thus help alleviate the shortage of good researchers. So money is tied into the person-shortage in this additional way.
I agree that suboptimal recruiting/hiring also causes issues, but it isn’t easy to solve this problem with money.
Here’s my recommendation for solving this problem with money: have paid 1-2 month work trials for applicants. The person you hire to oversee these doesn’t have to be super-competent themselves; they’re mostly a people-ops person coordinating the work-trialers. The outputs of the work could be judged relatively easily with just a bit of effort from the candidate’s prospective team (a validation-easier-than-production situation), and the physical co-location would give ample time for watercooler conversations to reveal culture fit.
Here’s another suggestion: how about telling the recruiters to spend the time to check personal references? This is rarely, if ever, done in my experience.
I’m pretty sure Ryan is rejecting the claim that the people hiring for the roles in question are worse-than-average at detecting illegible talent.