The main thing I’m referring to is upskilling or career transition grants, especially from LTFF, in the last couple of years. I don’t have stats; I’m assuming a lot were given out because I met a lot of people who had received them. Probably a bunch were also given out by the FTX Future Fund.
Also, when I did MATS, many of us got grants post-MATS to continue our research. Relatively little seems to have come of these.
How are they falling short?
(I sound negative about these grants, but I’m not, and I do want more stuff like that to happen. If I were making grants, I’d probably give many more of some kinds of safety research grant. But “if a man has an idea, just give him money and don’t ask questions” isn’t the right kind of change, imo.)
“upskilling or career transition grants, especially from LTFF, in the last couple of years”
Interesting; I’m less aware of these.
How are they falling short?
I’ll answer as though I know what’s going on in various private processes, but I don’t, so I could easily be wrong. I assume some of these suggestions are already done somewhere, to some degree, but not enough and not together enough.
Favor insightful critiques and orientations as much as constructive ideas. If you have a large search space and little traction, a half-plane of rejected possibilities is as valuable as, or more valuable than, a single guessed point, which is limited to what you already knew how to generate.
Explicitly allow acceptance based on trajectory of thinking, assessed by at least a year of low-bandwidth mentorship; deemphasize agenda-ish-ness (how much the proposal looks like a worked-out research agenda).
For initial exploration periods, give longer commitments with fewer required outputs; something like at least 2 years. Explicitly allow continuation of support based on trajectory.
Give a path forward for financial support of out-of-paradigm work. (The Vitalik fellowship, for example, probably does not qualify: when I glanced at the list, the professors seemed unlikely to support this sort of work. But I could be wrong.)
Generally emphasize the judgement of experienced AGI alignment researchers, and deemphasize the judgement of grantmakers.
Explicitly ask for out-of-paradigm work.
Do a better job of connecting people. (This one is vague but important.)
(To be clear, from my full perspective this is mostly a waste because AGI alignment is too hard; you instead want to put resources toward delaying AGI, trying to talk AGI-makers down, and strongly amplifying human intelligence + wisdom.)