The details depend on how you count the methodology/general existential risks work, e.g. the “Probing the Improbable” paper by Ord, Sandberg, and Hillerbrand. Also note that many of Bostrom’s and Sandberg’s publications, including the catastrophic risks book, and events like the Winter Intelligence Conference benefit from help by other FHI staff. Still, some hires have definitely done essentially no existential risk-relevant work. My guess is something like one Sandberg or Ord equivalent per 2-3 hires (with differential attrition leading to an accumulation of good researchers).
Also, given earmarked funding they can create positions specifically for machine intelligence issues, the results of which are easier to track (the output of that person).
But presumably that would only be a consideration if FHI received very large amounts of such earmarked funding?
Roughly $200k for one postdoc. One could save up for that with a donor-advised fund, alone or with others, or use something like kickstarter.com.