This argument has been had before on LessWrong. The usual counter is that we don’t actually know ahead of time who the top 20 people are, so we need to experiment & would do well to hedge our bets, and that experimentation is the main constraint on getting a top 20. Currently we do this, but only really for 1-2 years per person, whereas historically it takes more like 5 years to reveal yourself as a top-20 researcher, and I’d guess it can actually take more like 10.
So why not that funding model? Mostly a money thing.
If you do argue, I expect you will argue that revealing yourself as a top 20 in fact happens in fewer than 5 years.
Hmm, I really just mean that “labor” is probably the most important input to the current production function. I don’t want to make a claim that there aren’t better ways of doing things.
Ok, but when we ask why this constraint is tight, the answer is that there’s not enough funding. We can’t just increase the size of the field 10x in order to get 10x more top-20 researchers, because we don’t have the money for that.
For example, suppose MATS suddenly & magically scaled up 10x, and their next cohort was 1,000 people. Would this dramatically change the state of the field? I don’t think so.
Now suppose SFF & LTFF’s budget suddenly & magically scaled up 10x. Would this dramatically change the state of the field? I think so!
I do think so, especially if they also increased (and further decentralized) their grantmaking capacity, and perhaps increased field-building capacity earlier in the pipeline (e.g. AGISF, ML4G, etc., though I expect those programs are mostly doing comparatively well and are not the main bottlenecks).
So why not that funding model? Mostly a money thing.
*Seems like mostly a funding deployment issue, probably due to some structural problems, AFAICT, though I don’t have any great inside info. (This is within the traditional AI safety funding space; the rest of the world seems much less on the ball than the traditional AI safety funding space.)
I don’t understand what you mean. Do you mean there is lots of potential funding for AI alignment in, e.g., governments, but that funding is only going to university researchers?
No, I mean that EA + AI safety funders probably have a lot of money earmarked for AI risk mitigation, but they don’t seem able or willing to deploy it fast enough (according to my timelines, at least, but probably also according to many of theirs).
Governments mostly don’t seem on the ball at all w.r.t. AI, even despite the recent progress (e.g. the AI safety summits, the establishment of AISIs, etc.).