I think this is almost as much money as has gone into AI existential risk research across all organizations, ever.
Yep. Check out the MIRI top donors list to put the amount in perspective.
The survey indicates that LW has nontrivial experience with academia: 7% of LW users have a PhD and 9.9% work in academic computer science. I wonder if it’d be useful to create an “awarding effective grants” repository-type thread on LW, to pool thoughts on how grant money can be promoted and awarded to achieve research goals effectively. For example, my understanding is that there is a skill called “grantwriting,” distinct from research ability, that makes it easier to be awarded grants; I assume one would want to control for grantwriting ability if one wanted to hand out grants with maximum effectiveness. I don’t have much practical experience with academia, though… maybe someone who does could frame the problem better and go ahead and create the thread? (Or, alternatively, tell me why this thread is a bad idea. For example, maybe grantwriting skill consists mostly of knowing what the institutions that typically hand out grants like to see, and FLI is an atypical institution.)
An example of the kind of question we could discuss in such a thread: would it be a good idea for grant proposals to be posted for public commentary on FLI’s website, to help FLI evaluate proposals better and spur idea sharing on AI risk reduction in general?
Edit: Here’s the thread I created.