people sometimes talk about whether you should go into policy, ai research, ea direct work, etc. but afaict all of those fields work like normal careers where actually you have to spend several years resume-building before painstakingly convincing people you’re worth hiring for paid work. so imo these are not actually high-leverage paths to impact and the fields are not in fact short on people.
Depends on what you mean by “resume building”, but I don’t think this is true in the sense of “you need to do a bunch of AI safety work for free” or similar. For example, in technical research, many people who have gone through MATS and then been hired at safety orgs, or founded their own, had no prior experience doing anything that looks like AI safety research, and some don’t even have much in the way of ML backgrounds. Many people switch directly out of industry careers into doing e.g. ops or software work that isn’t technical research. Policy might seem a bit trickier, but I know several people who did not spend anything like years doing resume building before finding policy roles or starting their own policy orgs and getting funding. (Though I think policy might actually be the most “straightforward” field to break into, since all you need to do to demonstrate competence is publish a sufficiently good written artifact; admittedly this is mostly for starting your own thing. If you want to get hired at a “larger” policy org, resume building might matter more.)
Hiring being highly selective does not imply things aren’t constrained on people.
Getting 10x as many people as good as the top 20 best safety researchers would make a huge difference.
This argument has been had before on LessWrong. Usually the counter is that we don’t actually know ahead of time who the top 20 people are, and so need to experiment & would do well to hedge our bets; that hedging is the main constraint on getting a top 20. Currently we do this, but only really for 1-2 years per person, while historically it takes more like 5 years for someone to reveal themselves as a top 20, and I’d guess it can actually take more like 10 years.
So why not that funding model? Mostly a money thing.
I expect you will argue that in fact revealing yourself as a top 20 happens in fewer than 5 years, if you do argue.
Hmm, I really just mean that “labor” is probably the most important input to the current production function. I don’t want to make a claim that there aren’t better ways of doing things.
Ok, but when we ask why this constraint is tight, the answer is because there’s not enough funding. We can’t just increase the size of the field 10x in order to get 10x more top-20 researchers, because we don’t have the money for that.
For example, suppose MATS suddenly & magically scaled up 10x, and their next cohort was 1,000 people. Would this dramatically change the state of the field? I don’t think so.
Now suppose SFF & LTFF’s budget suddenly & magically scaled up 10x. Would this dramatically change the state of the field? I think so!
I do think so, especially if they also increased and decentralized their grantmaking capacity, and perhaps increased field-building capacity earlier in the pipeline (e.g. AGISF, ML4G, etc., though I expect those programs are mostly doing differentially quite well and are not the main bottlenecks).
*seems like mostly a funding deployment issue, probably due to some structural problems, AFAICT, without having any great inside info (within the traditional AI safety funding space; the rest of the world seems much less on the ball than the traditional AI safety funding space).
I don’t understand what you mean. Do you mean there is lots of potential funding for AI alignment in e.g. governments, but that funding is only going to university researchers?
No, I mean that EA + AI safety funders probably have a lot of money earmarked for AI risk mitigation, but they don’t seem able/willing to deploy it fast enough (according to my timelines, at least, but probably also according to many of theirs).
Governments mostly just don’t seem to be on the ball at all w.r.t. AI, despite the recent progress (e.g. the AI safety summits, the establishment of AISIs, etc.).
But legibility is a separate issue. If there are people who would potentially be good safety researchers, but they get turned away by recruiters because they don’t have a legibly impressive resume, then companies end up lacking employees they would have done well to hire.
So, companies could be less constrained on people if they were more thorough in evaluating candidates on more than shallow, easily-legible qualities.
Spending more money on this recruitment evaluation would thus help alleviate the lack of good researchers. So money is tied into the person-shortage in this additional way.
I agree that suboptimal recruiting/hiring also causes issues, but it isn’t easy to solve this problem with money.
Here’s my recommendation for solving this problem with money: have paid 1-2 month work trials for applicants. The person you hire to oversee these doesn’t have to be super-competent themselves; they’re mostly a people-ops person coordinating the work-trialers. The outputs of the work could be judged relatively easily with just a bit of work from the team evaluating the candidates (a validation-is-easier-than-production situation), and the physical co-location would give ample time for watercooler conversations to reveal culture fit.
Here’s another suggestion: how about telling the recruiters to spend the time to check personal references? This is rarely, if ever, done in my experience.
I’m pretty sure Ryan is rejecting the claim that the people hiring for the roles in question are worse-than-average at detecting illegible talent.
Yes, the field is definitely more funding-constrained than talent-constrained right now.
Putting this short rant here for no particularly good reason, but I dislike it when people claim constraints here or there in a way where, I’d guess, their intended meaning is only that “the derivative with respect to that input is higher than for the other inputs”.
On factory floors there exist hard constraints: the throughput is limited by the slowest machine (when everything has to pass through it). The AI safety world is obviously not like that. Increase funding and more work gets done; increase talent and more work gets done. None are hard constraints.
If I’m right that people are really only claiming the weak version, then I’d like to see somewhat more backing for their claims, especially if you say “definitely”. Since none of these are hard constraints, the derivatives could plausibly be really close to one another. In fact, they kind of have to be, because there are smart optimizers deciding where to spend their funding and actively trying to manage the proportion of money sent to field building (getting more talent) vs direct work.
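To make the distinction concrete, here is a minimal toy sketch in Python contrasting a hard bottleneck with a smooth production function; the functional forms and numbers are made up for illustration, not claims about the actual field.

```python
# Toy contrast between a hard constraint (factory floor) and a smooth
# production function (what "the field is X-constrained" often seems to mean).
# All functional forms and numbers are illustrative assumptions.

def factory_output(machine_rates):
    # Serial line: overall throughput is limited by the slowest machine.
    return min(machine_rates)

def field_output(funding, talent):
    # Smooth Cobb-Douglas-style toy: both inputs always help, just with
    # different marginal returns; neither is a hard constraint.
    return (funding ** 0.3) * (talent ** 0.7)

# Hard constraint: only the bottleneck machine has a nonzero marginal return.
print(factory_output([5.0, 3.0, 8.0]))  # 3.0
print(factory_output([5.0, 3.1, 8.0]))  # 3.1 -- speeding up the bottleneck helps
print(factory_output([5.1, 3.0, 8.0]))  # 3.0 -- speeding up another machine does nothing

# Soft "constraint": both derivatives are positive, so "talent constrained"
# can only mean d(output)/d(talent) > d(output)/d(funding), not that extra
# funding does nothing.
eps = 1e-6
f, t = 10.0, 10.0
d_funding = (field_output(f + eps, t) - field_output(f, t)) / eps
d_talent = (field_output(f, t + eps) - field_output(f, t)) / eps
print(d_funding, d_talent)  # roughly 0.3 vs 0.7
```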
There is not a difference between the two situations in the way you’re claiming, and indeed the differentiation point of view is used fruitfully both on factory floors and in more complex convex optimization problems. For example, see the connection between dual variables and how slack or taut each constraint is in convex optimization, and how this can be interpreted as a relative tradeoff price between each of the constrained resources.
In your factory-floor example, the constraints would be the throughput of each machine. Assuming you’re trying to maximize the throughput of the entire process, the dual variables would be zero everywhere except at the bottleneck machine, where the dual variable is (up to the usual sign convention) the derivative of the overall throughput with respect to that machine’s capacity, and we could confirm that the tight constraint is indeed that machine’s throughput by noting that this derivative is significantly greater than all the others.
Practical problems often have a similarly sparse structure in their constraining inputs, but just because the dual variables aren’t exactly zero everywhere except one place doesn’t mean the non-zero constraints are secretly not actually constraining, or that it’s unprincipled to use the same math and intuitions to reason about both situations.
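Here’s a minimal sketch of the dual-variable point, assuming cvxpy is available; the machine capacities are made-up numbers.

```python
# Factory-floor LP: maximize overall throughput subject to each machine's
# capacity, then read off the dual variables (shadow prices).
# Illustrative sketch only; the numbers are arbitrary.
import cvxpy as cp

capacities = [5.0, 3.0, 8.0]  # the second machine is the bottleneck
throughput = cp.Variable(nonneg=True)

constraints = [throughput <= c for c in capacities]
problem = cp.Problem(cp.Maximize(throughput), constraints)
problem.solve()

print(problem.value)  # 3.0 -- limited by the slowest machine
for cap, con in zip(capacities, constraints):
    # The dual variable is ~1.0 for the taut (binding) constraint and ~0.0
    # for the slack ones: relaxing the bottleneck's capacity is the only
    # change that moves the optimum, which is exactly the
    # "compare the derivatives" test described above.
    print(cap, con.dual_value)
```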
“you have to spend several years resume-building before painstakingly convincing people you’re worth hiring for paid work”
For government roles, I think “years of experience” is definitely an important factor. But I don’t think you need to have been specializing for government roles specifically.
Especially for AI policy, there are several programs that are basically like “hey, if you have AI expertise but no background in policy, we want your help.” To be clear, these are often still fairly competitive, but I think it’s much more about being generally capable/competent and less about having optimized your resume for policy roles.