If someone wants to become a grantmaker (perhaps with an AI risk focus) for an organization like LTFF, what do you think they should be doing to increase their odds of success?
By “success” do you mean “success at being hired as a grantmaker” or “success at doing a good job as a grantmaker?”
Success at being hired as a grantmaker.
IMO a good approach is doing anything that is object-level useful for x-risk mitigation, e.g. technical alignment work, AI governance / policy work, biosecurity, etc.