Just wanted to flag quickly that Open Philanthropy’s GCR Capacity Building team (where I work) has a career development and transition funding program.

The program aims to provide support for individuals at any career stage who want to pursue careers that could help reduce global catastrophic risks (especially AI risks). That support can take the form of funding for graduate study, unpaid internships, self-study, career transition and exploration periods, and other activities relevant to building career capital. The program is open globally and operates on a rolling basis.
I realize that this is quite different from what lemonhope is advocating for here, but nevertheless thought it would be useful context for this discussion (and potential applicants).
I would mostly advise people against making large career transitions on the basis of Open Phil funding, or, if they do, to be very conservative about it. For example, don’t quit your job because of a promise of 1 year of funding: it is quite possible your second year will only be given conditional on you aligning with the political priorities of OP funders or OP reputational management, and career transitions usually take longer than a year. To be clear, I think it often makes sense to accept funding from almost anyone, but in the case of OP it is funding with unusually hard-to-notice strings attached that might bite you when you are particularly weak-willed or vulnerable.
Also, if OP staff tell you they will give you future grants, or guarantee you some kind of “exit grant”, I would largely discount that, at least at the moment. This is true for many, if not most, funders, but my sense is that people tend to be particularly miscalibrated about OP (who aren’t particularly more or less trustworthy in their forecasts than random foundations and philanthropists, but who I think often get perceived as much more so).
Of course, different people’s risk appetite might differ, and mileage might vary, but if you can, I would try to negotiate for a 2-3 year grant, or find another funder to backstop you for another year or two, even if OP has said they would keep funding you, before pursuing some kind of substantial career pivot.
Regarding our career development and transition funding (CDTF) program:
The default expectation for CDTF grants is that they’re one-off grants. My impression is that this is currently clear to most CDTF grantees (e.g., I think most of them don’t reapply after the end of their grant period, and the program title explicitly says that it’s “transition funding”).
(When funding independent research through this program, we sometimes explicitly clarify that we’re unlikely to renew by default).
Most of the CDTF grants we make have grant periods that are shorter than a year (the main exception that comes to mind being PhD programs). I think that’s reasonable, especially given that the grantees know this when they accept the funding. I’d guess most of the people we fund through this program are able to find paid positions after <1 year.

(I probably won’t have time to engage further.)
Yeah, I was thinking of PhD programs as one of the most common longer-term grants.
Agree that it’s reasonable for a lot of this funding to be shorter. But given the shifting funding landscape, where most good research (by my lights) can no longer get funding, I would be quite hesitant for people to substantially sacrifice career capital in the hopes of getting funding later. More concretely, I think it’s the right choice for people to choose a path that leaves them a lot of slack to think about which directions to pursue, rather than one that makes them particularly vulnerable to economic incentives while trying to orient within the very high-stakes-feeling and difficult-to-navigate existential risk reduction landscape; that kind of vulnerability tends to result in the best people predictably going to work for big capability companies.
This also applies to the constraint of “finding paid positions after <1 year”: the set of organizations that have funding to sponsor good work is very small these days (though I do think that has a decent chance of changing again within a year or two, so it’s not a crazy bet to make).
Given these recent shifts and the harsher economic incentives of transitioning into the space, I think it would make sense for people to negotiate with OP about getting longer grants than OP has historically given (which, based on conversations I’ve had, I think OP staff consider sensible as well).
“conditional on you aligning with the political priorities of OP funders or OP reputational management”
Do you mean something more expansive than “literally don’t pursue projects that are either conservative/Republican-coded or explicitly involved in expanding/enriching the Rationality community”? Which, to be clear, would be less-than-ideal if true, but should be talked about in more specific terms when giving advice to potential grant recipients.
I get an overall vibe from many of the comments you’ve made recently about OP, both here and on the EA Forum, that you believe, in a rather broad sense, that they are acting to maximize their own reputation or to follow whatever Dustin’s whims are that day (and, consequently, that they are lying about or obfuscating this in their public communications to spin these decisions the opposite way), but I don’t think[1] you have mentioned any specific details that go beyond their own dealings with Lightcone and with right-coded figures.

[1] Could be a failure of my memory, of course.
Yes, I do not believe OP funding constraints are well-described as limitations on grants specifically to the “rationality community” or to “conservative/Republican-coded activities”.
Just as an illustration: if you start thinking about, or directing your career towards, making sure we don’t torture AI systems despite their maybe having moral value, that is also a domain from which OP has withdrawn funding. The same goes if you want to work on wild animal or invertebrate suffering. I also know of multiple other grantees who cannot receive funding even though they do not straightforwardly fall into any of the domains OP has announced it is withdrawing from.[1]
I think the best description for predicting what OP is avoiding funding right now, and will continue to avoid funding in the future, is broadly: “things that might make Dustin or OP look weird and that are not in the very small set of domains where OP is OK with taking reputational hits or defending people who want to be open about their beliefs, or that might otherwise cost them political capital with potential allies (which include, but are not limited to, the Democratic Party, AI capability companies, various US government departments, and a vague conception of the left-leaning intellectual elite)”.
This is not a perfect description. There is a very messy principal-agent problem between Good Ventures and Open Phil: Open Phil staff would often like to make weirder grants, GV wants fewer of them, and the two are reputationally entwined. I definitely don’t understand the resulting dynamics in detail, but I think at a high level the description above will make better predictions than any list of domains.
Epistemic status: Speculating about adversarial and somewhat deceptive PR optimization, which is inherently very hard and somewhat paranoia-inducing. I am quite confident of the broad trends, but it’s definitely more likely that I am getting things wrong here than in domains where evidence is more straightforward to interpret and people are less likely to shape their behavior in ways that include plausible deniability and defensibility.

See also this other comment of mine: https://www.lesswrong.com/posts/wn5jTrtKkhspshA4c/michaeldickens-s-shortform?commentId=zoBMvdMAwpjTEY4st
I agree with this, but I actually think the issues with Open Phil are substantially broader. As a concrete example, as far as I can piece together from various things I have heard, Open Phil does not want to fund anything that is even slightly right of center in any policy work. I don’t think this is because of any COIs; it’s because Dustin is very active in the Democratic Party and doesn’t want to be affiliated with anything that is right-coded. Of course, this has huge effects by incentivizing polarization of AI policy work with billions of dollars: any Open Phil-funded AI policy organization that wants to engage with people on the right might lose all of its funding because of that, so you can be confident such organizations will steer away from doing so.
Open Phil is also very limited in what they can say about what they can or cannot fund, because they worry that saying so would itself make people annoyed with Dustin, which creates a terrible fog around how OP is thinking about things.[1]
Honestly, I think there might no longer be a single organization that I have historically been excited about that Open Phil wants to fund. MIRI could not get OP funding, FHI could not get OP funding, Lightcone cannot get OP funding, my best guess is Redwood could not get OP funding if they tried today (though I am quite uncertain of this), most policy work I am excited about cannot get OP funding, the LTFF cannot get OP funding, any kind of intelligence-enhancement work cannot get OP funding, CFAR cannot get OP funding, SPARC cannot get OP funding, FABRIC (ESPR, etc.) and Epistea (FixedPoint and other Prague-based projects) cannot get OP funding, and not even ARC is being funded by OP these days (in that case because of COIs between Paul and Ajeya).[2] I would be very surprised if Wentworth’s work, or Wei Dai’s work, or Daniel Kokotajlo’s work, or Brian Tomasik’s work could get funding from them these days. I might be missing some good ones, but the funding landscape is really quite thoroughly fucked in that respect. My best guess is Scott Alexander could not get funding, but I am not totally sure.[3]
I cannot think of anyone who I would credit with the creation or shaping of the field of AI Safety or Rationality who could still get OP funding. Bostrom, Eliezer, Hanson, Gwern, Tomasik, Kokotajlo, Sandberg, Armstrong, Jessicata, Garrabrant, Demski, Critch, and Carlsmith would all be unable to get funding[4] as far as I can tell. Insofar as OP is the most powerful actor in the space, the original geeks are being thoroughly ousted.[5]
In general, my sense is that if you want to be an OP longtermist grantee these days, you have to be the kind of person who OP thinks is not and will not be a PR risk, who OP thinks has “good judgement” on public comms, who isn’t the kind of person who might say weird or controversial stuff, and who is not at risk of becoming politically opposed to OP. This includes not annoying any potential allies OP might have, not associating with anything that Dustin doesn’t like, and not straining Dustin’s relationships with others in any non-trivial way.
Of course, OP will never ask you to fit these constraints directly, since that itself could explode reputationally (and also because OP staff themselves seem miscalibrated on this and do not seem in sync with their leadership). Instead, you will just get less and less funding, or be defunded entirely, if you aren’t the kind of person who gets the hint that this is how the game is played now.
(Note that a bunch of well-informed people disagreed with at least sections of the above, e.g. Buck from Redwood disagreed that Redwood couldn’t get funding, so it might make sense to check out the original discussion.)
I am using OP’s own language about “withdrawing funding”. However, as I say in a recent EA Forum comment, as Open Phil ramps up the degree to which it makes recommendations to non-GV funders, and OP’s preferences come apart from the preferences of its funders, it might be a good idea to taboo the phrase “OP funds X”, because it is starting to get confusing.
Can’t Dustin donate $100k anonymously (bitcoin or cash) to researchers in a way that decouples his reputation from the people he’s funding?