I’m a guest fund manager for the LTFF, and wanted to say that my impression is that the LTFF is often pretty excited about giving people ~6-month grants to try out alignment research at 70% of their industry-counterfactual pay (the 70% is basically to prevent grift). Then, the LTFF can give continued support if they seem to be doing well. If getting this funding would make you excited to switch into alignment research, I’d encourage you to apply.
I also think that there’s a lot of impactful stuff to do for AI existential safety that isn’t alignment research! For example, I’m quite into people doing strategy, policy outreach to relevant people in government, actually writing policy, capability evaluations, and leveraged community building like CBAI.
If the initial grant goes well, do you give funding at the market price for their labor?
Sometimes, but the norm is 70%. This is mostly decided on a case-by-case basis, but salient factors to me include:
Does the person need the money? (How high is the cost of living where they live, do they have a family, etc.?)
What is the industry counterfactual? If someone would make $300k, we likely wouldn’t pay them 70% of that, while if their counterfactual were $50k, it feels more reasonable to pay them 100% (or even more).
How good is the research?
Quite informative, thanks!
Ah, thanks! LTFF was definitely on my list of things to apply for, I just wasn’t sure if that upskilling/trial period was still “a thing” these days. Very glad that it is!
Thanks for posting this. Not OP, but I will likely apply come early June. If anyone else is associated with other grant opportunities, I’d love to hear about those as well.