I don’t have a witty, insightful, neutral-sounding way to say this. The grantmakers should let the money flow. There are thousands of talented young safety researchers with decent ideas and exceptional minds, but they probably can’t prove it to you. They only need one thing and it is money.
They will be 10x less productive in a big nonprofit and they certainly won’t find the next big breakthrough there.
(Meanwhile, there are increasingly better ways to make money that don’t involve any good deeds at all.)
My friends were a good deal sharper and more motivated at 18 than now at 25. None of them had any chance at getting grants back then, but they have an ok shot now. At 35, their resumes will be much better and their minds much duller. And it will be too late to shape AGI at all.
I can’t find a good LW voice for this point, but I feel this is incredibly important. Managers will find all the big nonprofits and eat their gooey centers and leave behind empty husks. They will do this quickly, within a couple years of each nonprofit being founded. The founders themselves will not be spared. Look at how the writing of Altman or Demis has changed over the years.
The funding situation needs to change very much and very quickly. If a man has an idea just give him money and don’t ask questions. (No, I don’t mean me.)
I think I disagree. This is a bandit problem, and grantmakers have tried pulling that lever a bunch of times. There hasn’t been any field-changing research (yet). They knew it had a low chance of success so it’s not a big update. But it is a small update.
Probably the optimal move isn’t cutting early-career support entirely, but having a higher bar seems correct. There are other levers that are worth trying, and we don’t have the resources to try every lever.
Also, there are more grifters now that the word is out, so the EV is declining that way too.
(I feel bad saying this as someone who benefited a lot from early-career financial support).
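To make the “low chance of success, so not a big update” point concrete, here’s a minimal toy Bayes sketch. The 50/50 prior, the 2% per-grant chance of a field-changing result if the lever works, and the grant counts are all invented for illustration, not real grant statistics:

```python
# Toy Bayes update: how much should "n no-questions-asked grants, zero
# field-changing results so far" move a grantmaker's credence that the
# let-the-money-flow lever works at all? All numbers are made up.

def posterior_lever_works(prior: float, p_hit_if_works: float, n_grants: int) -> float:
    """P(lever works | zero hits in n_grants), assuming ~0 hit chance if it doesn't."""
    p_no_hits_if_works = (1 - p_hit_if_works) ** n_grants
    return prior * p_no_hits_if_works / (prior * p_no_hits_if_works + (1 - prior))

for n in (10, 50, 200):
    print(n, round(posterior_lever_works(prior=0.5, p_hit_if_works=0.02, n_grants=n), 2))
# 10 -> 0.45, 50 -> 0.27, 200 -> 0.02: a small update early, a big one only after many pulls
```

On numbers like that, a couple dozen dry grants is only a small update against the lever, but the evidence does accumulate as the number of pulls grows.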
grantmakers have tried pulling that lever a bunch of times
What do you mean by this? I can think of lots of things that seem in some broad class of pulling some lever that kinda looks like this, but most of the ones I’m aware of fall greatly short of being an appropriate attempt to leverage smart young creative motivated would-be AGI alignment insight-havers. So the update should be much smaller (or there’s a bunch of stuff I’m not aware of).
The main thing I’m referring to is upskilling or career transition grants, especially from LTFF, in the last couple of years. I don’t have stats; I’m assuming a lot were given out because I met a lot of people who had received them. Probably a bunch were also given out by the FTX Future Fund.
Also when I did MATS, many of us got grants post-MATS to continue our research. Relatively little seems to have come of these.
How are they falling short?
(I sound negative about these grants but I’m not, and I do want more stuff like that to happen. If I were grantmaking I’d probably give many more of some kinds of safety research grant. But “If a man has an idea just give him money and don’t ask questions” isn’t the right kind of change imo).
upskilling or career transition grants, especially from LTFF, in the last couple of years
Interesting; I’m less aware of these.
How are they falling short?
I’ll answer as though I know what’s going on in various private processes, but I don’t, and therefore could easily be wrong. I assume some of these are sort of done somewhere, but not enough and not together enough.
Favor insightful critiques and orientations as much as constructive ideas. If you have a large search space and little traction, a half-plane of rejects is as valuable as, or more valuable than, a guessed point that you even knew how to generate.
Explicitly allow acceptance by trajectory of thinking, assessed by at least a year of low-bandwidth mentorship; deemphasize agenda-ish-ness.
For initial exploration periods, give longer commitments with fewer required outputs; something like at least 2 years. Explicitly allow continuation of support by trajectory.
Give a path forward for financial support for out-of-paradigm things. (The Vitalik fellowship, for example, probably does not qualify, as the professors, when I glanced at the list, seem unlikely to support this sort of work; but I could be wrong.)
Generally emphasize judgement of experienced AGI alignment researchers, and deemphasize judgement of grantmakers.
Explicitly ask for out-of-paradigm things.
Do a better job of connecting people. (This one is vague but important.)
(TBC, from my full perspective this is mostly a waste because AGI alignment is too hard; you want to instead put resources toward delaying AGI, trying to talk AGI-makers down, and strongly amplifying human intelligence + wisdom.)
I agree this would be a great program to run, but I want to call it a different lever to the one I was referring to.
The only thing I would change is that I think new researchers need to understand the purpose and value of past agent foundations research. I spent too long searching for novel ideas while I still misunderstood the main constraints of alignment. I expect you’d get a lot of wasted effort if you asked for out-of-paradigm ideas. Instead it might be better to ask for people to understand and build on past agent foundations research, then gradually move away if they see other pathways after having understood the constraints. Now I see my work as mostly about trying to run into constraints for the purpose of better understanding them.
Maybe that wouldn’t help, though; it’s really hard to make people see the constraints.
We agree this is a crucial lever, and we agree that the bar for funding has to be in some way “high”. I’m arguing for a bar that’s differently shaped. The set of “people established enough in AGI alignment that they get 5 [fund a person for 2 years and maybe more depending how things go in low-bandwidth mentorship, no questions asked] tokens” would hopefully include many people who understand that understanding constraints is key and that past research understood some constraints.
build on past agent foundations research
I don’t really agree with this. Why do you say this?
a lot of wasted effort if you asked for out-of-paradigm ideas.
I agree with this in isolation. I think some programs do state something about OOP ideas, and I agree that the statement itself does not come close to solving the problem.
(Also I’m confused about the discourse in this thread (which is fine), because I thought we were discussing “how / how much should grantmakers let the money flow”.)
would hopefully include many people who understand that understanding constraints is key and that past research understood some constraints.
Good point, I’m convinced by this.
build on past agent foundations research
I don’t really agree with this. Why do you say this?
That’s my guess at the level of engagement required to understand something. Maybe just because when I’ve tried to use or modify some research that I thought I understood, I always realise I didn’t understand it deeply enough. I’m probably anchoring too hard on my own experience here, other people often learn faster than me.
(Also I’m confused about the discourse in this thread (which is fine), because I thought we were discussing “how / how much should grantmakers let the money flow”.)
I was thinking “should grantmakers let the money flow to unknown young people who want a chance to prove themselves.”
That’s my guess at the level of engagement required to understand something. Maybe just because when I’ve tried to use or modify some research that I thought I understood, I always realise I didn’t understand it deeply enough. I’m probably anchoring too hard on my own experience here, other people often learn faster than me.
Hm. A couple things:
Existing AF research is rooted in core questions about alignment.
Existing AF research, pound for pound / word for word, and even idea for idea, is much more unnecessary stuff than necessary stuff. (Which is to be expected.)
Existing AF research is among the best sources of compute-traces of trying to figure some of this stuff out (next to perhaps some philosophy and some other math).
Empirically, most people who set out to study existing AF fail to get many of the deep lessons.
There’s a key dimension of: how much are you always asking for the context? E.g.: Why did this feel like a mainline question to investigate? If we understood this, what could we then do / understand? If we don’t understand this, are we doomed / how are we doomed? Are there ways around that? What’s the argument, more clearly?
It’s more important whether people are doing that, than whether / how exactly they engage with existing AF research.
If people are doing that, they’ll usually migrate away from playing with / extending existing AF, towards the more core (more difficult) problems.
I was thinking “should grantmakers let the money flow to unknown young people who want a chance to prove themselves.”
Ah ok you’re right that that was the original claim. I mentally autosteelmanned.
Just wanted to flag quickly that Open Philanthropy’s GCR Capacity Building team (where I work) has a career development and transition funding program.
The program aims to provide support—in the form of funding for graduate study, unpaid internships, self-study, career transition and exploration periods, and other activities relevant to building career capital—for individuals at any career stage who want to pursue careers that could help reduce global catastrophic risks (esp. AI risks). It’s open globally and operates on a rolling basis.
I realize that this is quite different from what lemonhope is advocating for here, but nevertheless thought it would be useful context for this discussion (and potential applicants).
I would mostly advise people against making large career transitions on the basis of Open Phil funding, or if you do, I would be very conservative with it. Like, don’t quit your job because of a promise of 1 year of funding, because it is quite possible your second year will only be given conditional on you aligning with the political priorities of OP funders or OP reputational management, and career transitions usually take longer than a year. To be clear, I think it often makes sense to accept funding from almost anyone, but in the case of OP it is funding with unusually hard-to-notice strings attached that might bite you when you are particularly weak-willed or vulnerable.
Also, if OP staff tells you they will give you future grants, or guarantees you some kind of “exit grant”, I would largely discount that, at least at the moment. This is true for many, if not most, funders, but my sense is people tend to be particularly miscalibrated for OP (who aren’t particularly more or less trustworthy in their forecasts than random foundations and philanthropists, but who I think often get perceived as much more so).
Of course, different people’s risk appetite might differ, and mileage might vary, but if you can, I would try to negotiate for a 2-3 year grant, or find another funder to backstop you for another year or two, even if OP has said they would keep funding you, before pursuing some kind of substantial career pivot.
Regarding our career development and transition funding (CDTF) program:
The default expectation for CDTF grants is that they’re one-off grants. My impression is that this is currently clear to most CDTF grantees (e.g., I think most of them don’t reapply after the end of their grant period, and the program title explicitly says that it’s “transition funding”).
(When funding independent research through this program, we sometimes explicitly clarify that we’re unlikely to renew by default).
Most of the CDTF grants we make have grant periods that are shorter than a year (with the main exception that comes to mind being PhD programs). I think that’s reasonable (esp. given that the grantees know this when they accept the funding). I’d guess most of the people we fund through this program are able to find paid positions after <1 year.
(I probably won’t have time to engage further.)
Yeah, I was thinking of PhD programs as one of the most common longer-term grants.
I agree that it’s reasonable for a lot of this funding to be shorter. But given the shifting funding landscape, where most research that is good by my lights can no longer get funding, I would be quite hesitant for people to substantially sacrifice career capital in the hope of getting funding later. More concretely, I think it’s the right choice for people to pick a path that leaves them a lot of slack to think about what directions to pursue, instead of one that leaves them particularly vulnerable to economic incentives while they try to orient in the very high-stakes-feeling and difficult-to-navigate existential risk reduction landscape (a vulnerability that tends to result in the best people predictably working for big capability companies).
This includes the constraint of “finding paid positions after <1 year”: the set of organizations that have funding to sponsor good work is also very small these days (though I do think that has a decent chance of changing again within a year or two, so it’s not a crazy bet to make).
Given these recent shifts and the harsher economic incentives of transitioning into the space, I think it would make sense for people to negotiate with OP about getting longer grants than OP has historically granted (which, based on conversations I’ve had, I think OP staff would also consider reasonable).
conditional on you aligning with the political priorities of OP funders or OP reputational management
Do you mean something more expansive than “literally don’t pursue projects that are either conservative/Republican-coded or explicitly involved in expanding/enriching the Rationality community”? Which, to be clear, would be less-than-ideal if true, but should be talked about in more specific terms when giving advice to potential grant recipients.
I get an overall vibe from many of the comments you’ve made recently about OP, both here and on the EA forum, that you believe in a rather broad sense they are acting to maximize their own reputation or whatever Dustin’s whims are that day (and, consequently, lying/obfuscating this in their public communications to spin these decisions the opposite way), but I don’t think[1] you have mentioned any specific details that go beyond their own dealings with Lightcone and with right-coded figures.
Could be a failure of my memory, of course.
Yes, I do not believe OP funding constraints are well-described by either limitations on grants specifically to “rationality community” or “conservative/republican-coded activities”.
Just as an illustration: if you start thinking about, or directing your career towards, making sure we don’t torture AI systems despite them maybe having moral value, that is also a domain from which OP has withdrawn funding. Same if you want to work on any wild animal or invertebrate suffering. I also know of multiple other grantees who cannot receive funding despite not straightforwardly falling into any of the domains OP has announced it is withdrawing from.[1]
I think the best description for predicting what OP is avoiding funding right now, and will continue to avoid funding into the future is broadly “things that might make Dustin or OP look weird, and are not in a very small set of domains where OP is OK with taking reputational hits or defending people who want to be open about their beliefs, or might otherwise cost them political capital with potential allies (which includes but is not exclusive to the democratic party, AI capability companies, various US government departments, and a vague conception of the left-leaning intellectual elite)”.
This is not a perfect description, because I do think there is a very messy principal-agent problem going on with Good Ventures and Open Phil, where Open Phil staff would often like to make weirder grants and GV wants to do less of that, and they are reputationally entwined, and the dynamics arising from that are something I definitely don’t understand in detail. But I think at a high level the description above will make better predictions than any list of domains.
See also this other comment of mine: https://www.lesswrong.com/posts/wn5jTrtKkhspshA4c/michaeldickens-s-shortform?commentId=zoBMvdMAwpjTEY4st
Epistemic status: Speculating about adversarial and somewhat deceptive PR optimization, which is inherently very hard and somewhat paranoia inducing. I am quite confident of the broad trends here, but it’s definitely more likely that I am getting things wrong here than in other domains where evidence is more straightforward to interpret, and people are less likely to shape their behavior in ways that includes plausible deniability and defensibility.
I agree with this, but I actually think the issues with Open Phil are substantially broader. As a concrete example, as far as I can piece together from various things I have heard, Open Phil does not want to fund anything that is even slightly right of center in any policy work. I don’t think this is because of any COIs; it’s because Dustin is very active in the democratic party and doesn’t want to be affiliated with anything that is right-coded. Of course, this has huge effects by incentivizing polarization of AI policy work with billions of dollars, since any Open Phil-funded AI policy organization that wants to engage with people on the right might just lose all of their funding because of that, and so you can be confident they will steer away from that.
Open Phil is also very limited in what they can say about what they can or cannot fund, because that itself is something that they are worried will make people annoyed with Dustin, which creates a terrible fog around how OP is thinking about stuff.[1]
Honestly, I think there might no longer be a single organization that I have historically been excited about that Open Phil wants to fund. MIRI could not get OP funding, FHI could not get OP funding, Lightcone cannot get OP funding, my best guess is Redwood could not get OP funding if they tried today (though I am quite uncertain of this), most policy work I am excited about cannot get OP funding, the LTFF cannot get OP funding, any kind of intelligence enhancement work cannot get OP funding, CFAR cannot get OP funding, SPARC cannot get OP funding, FABRIC (ESPR etc.) and Epistea (FixedPoint and other Prague-based projects) cannot get OP funding, not even ARC is being funded by OP these days (in that case because of COIs between Paul and Ajeya).[2] I would be very surprised if Wentworth’s work, or Wei Dai’s work, or Daniel Kokotajlo’s work, or Brian Tomasik’s work could get funding from them these days. I might be missing some good ones, but the funding landscape is really quite thoroughly fucked in that respect. My best guess is Scott Alexander could not get funding, but I am not totally sure.[3]
I cannot think of anyone who I would credit with the creation or shaping of the field of AI Safety or Rationality who could still get OP funding. Bostrom, Eliezer, Hanson, Gwern, Tomasik, Kokotajlo, Sandberg, Armstrong, Jessicata, Garrabrant, Demski, Critch, Carlsmith, would all be unable to get funding[4] as far as I can tell. In as much as OP is the most powerful actor in the space, the original geeks are being thoroughly ousted.[5]
In general, my sense is that if you want to be an OP longtermist grantee these days, you have to be the kind of person that OP thinks is not and will not be a PR risk, and who OP thinks has “good judgement” on public comms, and who isn’t the kind of person who might say weird or controversial stuff, and is not at risk of becoming politically opposed to OP. This includes not annoying any potential allies that OP might have, or associating with anything that Dustin doesn’t like, or that might strain Dustin’s relationships with others in any non-trivial way.
Of course OP will never ask you to fit these constraints directly, since that itself could explode reputationally (and also because OP staff themselves seem miscalibrated on this and do not seem in-sync with their leadership). Instead you will just get less and less funding, or just be defunded fully, if you aren’t the kind of person who gets the hint that this is how the game is played now.
(Note that a bunch of well-informed people disagreed with at least sections of the above, like Buck from Redwood disagreeing that Redwood couldn’t get funding, so it might make sense to check out the original discussion)
I am using OP’s own language about “withdrawing funding”. However, as I say in a recent EA Forum comment, as Open Phil is ramping up the degree to which it is making recommendations to non-GV funders, and OP’s preferences come apart from the preferences of their funders, it might be a good idea to taboo the terms “OP funds X”, because it starts being confusing.
Can’t Dustin donate $100k anonymously (bitcoin or cash) to researchers in a way that decouples his reputation from the people he’s funding?
How do you tell that they were sharper back then?
It sounds pretty implausible to me; intellectual productivity is usually at its peak from the mid-20s to mid-30s (for high-fluid-intelligence fields like math and physics).
People asked for a citation, so here’s one: https://www.kellogg.northwestern.edu/faculty/jones-ben/htm/age%20and%20scientific%20genius.pdf
Although my belief was more based on anecdotal knowledge of the history of science. Looking up people at random: Einstein’s annus mirabilis was at 26; Cantor invented set theory at 29; Hamilton discovered Hamiltonian mechanics at 28; Newton invented calculus at 24. Hmm, I guess this makes it seem more like early 20s to 30. Either way, 25 is definitely in peak range, and 18 is typically too young (although people have made great discoveries by 18, like Galois; but he likely would have been more productive later had he lived past 20).
Einstein started doing research a few years before he actually had his miracle year. If he had started at 26, he might never have found anything. He went to physics school at 17 or 18. You can’t go to “AI safety school” at that age, but if you have funding then you can start learning on your own. It’s harder to learn than, e.g., learning to code, but not impossibly hard.
I am not opposed to funding 25- or 30- or 35- or 40-year-olds, but I expect that the most successful people got started in their field (or a very similar one) as a teenager. I wouldn’t expect funding an 18-year-old to pay off in less than 4 years. Sorry for being unclear on this in the original post.
Yeah I definitely agree you should start learning as young as possible. I think I would usually advise a young person starting out to learn general math/CS stuff and do AI safety on the side, since there’s way more high-quality knowledge in those fields. Although “just dive in to AI” seems to have worked out well for some people like Chris Olah, and timelines are plausibly pretty short so ¯\_(ツ)_/¯
This definitely differs for different folks. I was nowhere near my sharpest in late teens or early twenties. I think my peak was early 30s. Now in early 40s, I’m feeling somewhat less sharp, but still ahead of where I was at 18 (even setting aside crystalized knowledge).
I do generally agree though that this is a critical point in history, and we should have more people trying more research directions.
In general, people should feel free to DM me with pitches for this sort of thing.
Perhaps say some words on why they might want to?
Because I might fund them or forward it to someone else who will.