Open Phil is definitely by far the biggest funder in the field. I agree that their technical grantmaking has been limited over the past few years (though still on the order of $50M/yr, I think), but they also fund a huge amount of field-building and talent-funnel work, as well as a lot of policy stuff (I wasn't constraining myself to technical AI Safety; the people listed have been as influential, if not more, on public discourse and policy).
AI Safety is still relatively small, but more like $400M/yr small. The primary other employers/funders in the space these days are big capability labs. As you can imagine, their funding does not have great incentives either.
Yeah, I agree, and I don't know that much about OpenPhil's policy work, and their fieldbuilding seems decent to me, though maybe not from your perspective. I just wanted to flag that many people (including myself until recently) overestimate how big a funder OP is in technical AI safety specifically, where their scope is actually pretty limited.
Yep, agree that this is a commonly overlooked aspect (and one that I think has also contributed to the labs becoming the dominant employer of AI Safety researchers, which has been quite sad).