Isn’t it just the case that OpenPhil generally doesn’t fund that many technical AI safety things these days? If you look at OP’s team on their website, they have only two technical AI safety grantmakers. Also, you list all the things OP doesn’t fund, but what are the things in technical AI safety that they do fund? Looking at their grants, it’s mostly MATS and METR and Apollo and FAR and some scattered academics I mostly haven’t heard of. It’s not that many things. I have the impression that the story is less like “OP is a major funder in technical AI safety, but unfortunately they blacklisted all the rationalist-adjacent orgs and people” and more like “AI safety is still a very small field, especially if you only count people outside the labs, and there are just not that many exciting funding opportunities, and OpenPhil is not actually a very big funder in the field”.
Open Phil is definitely by far the biggest funder in the field. I agree that their technical grantmaking has been limited over the past few years (though still on the order of $50M/yr, I think), but they also fund a huge amount of field-building and talent-funnel work, as well as a lot of policy stuff (I wasn’t constraining myself to technical AI Safety; the people listed have been as influential, if not more, on public discourse and policy).
AI Safety is still relatively small, but more like $400M/yr small. The primary other employers/funders in the space these days are big capability labs. As you can imagine, their funding does not have great incentives either.
Yeah, I agree, and I don’t know that much about OpenPhil’s policy work, and their fieldbuilding seems decent to me, though maybe not from your perspective. I just wanted to flag that many people (including myself until recently) overestimate how big a funder OP is in technical AI safety, and I think it’s important to flag that they actually have pretty limited scope in this area.
Yep, agree that this is a commonly overlooked aspect (and one that I think has sadly contributed to the labs becoming the dominant force in AI Safety research, which has been quite unfortunate).
A lot of OP’s funding to technical AI safety goes to people outside the main x-risk community (e.g. applications to Ajeya’s RFPs).