Room for more funding at the Future of Humanity Institute
In case you didn’t already know: The Future of Humanity Institute, one of the three organizations co-sponsoring LW, is a group within the University of Oxford’s philosophy department that tackles important, large-scale problems for humanity, such as how to reduce existential risk.
I’ve been casually corresponding with the FHI in an effort to learn more about the different options available for purchasing existential risk reduction. Here’s a summary of what I’ve learned from research fellow Stuart Armstrong and academic project manager Sean O’Heigeartaigh:
Sean reports that since this SIAI/FHI achievements comparison, FHI’s full-time research team has expanded to 7, the biggest it’s ever been. Sean writes: “Our output has improved dramatically by all tangible metrics (academic papers, outreach, policy impact, etc) to match this.”
Despite this, Sean writes, “we’re not nearly at the capacity we’d like to reach. There are a number of research areas in which we would very much like to expand (more machine intelligence work, synthetic biology risks, surveillance/information society work) and in which we feel that we could make a major impact. There are also quite a number of talented researchers over the past year whom we haven’t been able to employ but would dearly like to.”
They’d also like to do more public outreach, but standard academic funding routes aren’t likely to cover this. So without funding from individuals, it’s much less likely to happen.
Sean is currently working overtime to cover for a missing administrative staff member, but he plans to release a new achievement report (see the sidebar on this page for past achievement reports) sometime in the next few months.
Although the FHI has traditionally pursued standard academic funding channels, donations from individuals (small and large) are more than welcome. (Stuart says this can’t be emphasized enough.)
Stuart reports current academic funding opportunities are “a bit iffy, with some possible hopes”.
Sean is more optimistic than Stuart regarding near-term funding prospects, although he does mention that both Stuart and Anders Sandberg are currently being covered by FHI’s “non-assigned” funding until grants for them can be secured.
Although neither Stuart nor Sean mentions this, I assume one reason individual donations can be especially valuable is that they free FHI researchers from writing grant proposals, letting them spend more time doing actual research.
Interesting comment by lukeprog describing the comparative advantages of SIAI and FHI.
Slightly off topic, but I’m very interested in the “policy impact” that FHI has had—I had heard nothing about it before and assumed it wasn’t having much. Do you have more information on that? If the impact is significant, it would increase the odds that giving to FHI is a great option.
We get to talk to government and military people quite a bit, attending seminars and giving them presentations, and they nod wisely and ask pertinent questions which we answer. We’re not sure how much this has translated into actual policy differences at the end of the day, but there does seem to be a class of people in government willing to listen to these ideas (informally, it seems that the military is more interested than the standard civil servants and politicians).
There are other policy achievements, but Nick and Anders would know more...
I happened to see this on the FHI website, but don’t know of anything beyond that.
Unfortunately, the impact of information is often too closely tied to the funding poured into its propagation. Look at the way American media networks are basically billboards for the rich.
What kinds of things do you have in mind—does there seem to be a clear bias in favor of the interests of the rich over those of the poor?
I agree that FHI’s room for more funding (on small scales) is significantly higher this year than in the last few years, when I would have said it was relatively more immediately limited by staff supply.
Another donation opportunity came up recently, which I responded to with a big long list of questions; I’ll put the answers up when I get them. People seemed to like this approach—can we do something similar for the FHI?
Some thoughts:
Are the people at FHI going to be too busy to answer this kind of stuff?
Are they likely to be limited in how candid they can be with their answers if the answers are going to be made public?
I’m guessing Stuart or Sean would be the people you’d recommend talking to?
I’ll make a small attempt at answering some of the questions (I know little about the financial side, alas):
Hire more people, put current people on longer contracts, and free current people from writing grant applications or slavishly following the requirements of the grants they are currently on (which would probably mean an increased AI-risk focus).
No idea.
Generally interchangeable; regular donations with a definite timeline (“I will donate at least until the date of XXXX”) are equivalent to a particular lump sum. Regular donations with no commitment are a bit more iffy, as they increase uncertainty.
We have some people in mind (Nick Beckstead, for instance). In general, the FHI gets far more high-quality applications than we have places, so we can select from the best.
Others have commented on this; I’ll simply point out that the FHI is in a unique position, being a university research institute with some political contacts. No other org could easily replicate this.
Not much. We have Giving What We Can on staff, and interact with SIAI and (to some extent) with 80KH, but the FHI tends to interact with specific individuals (Robin Hanson, Eric Drexler, Milan Circovic...) rather than organisations.
The University probably has a policy on that.
I don’t know this side of things, sorry!
Not really relevant to the FHI—our two main things are academic research (with all the presenting, publishing and so on that that entails) and outreach/policy efforts. We won’t be giving up either any time soon.
We’re rather conventional in organisation: a standard university research institute, people pursuing their own projects and meeting to coordinate and exchange a lot of ideas.
We want to move downstairs in our building. This isn’t really relevant to donors.
This is more a question for Sean or Nick. Being part of the university, we follow their criteria for transparency, and a lot of the self-evaluation is based on other-evaluation: seeing how many of our papers are accepted, the attendance at conferences, and similar.
Questions 14-16 not relevant to the FHI.
Hope this brief answer helps!
Oh wow, totally wasn’t expecting you to go ahead and answer that particular list of questions. Thanks for being so proactive!
Questions 7-11 aren’t really relevant to FHI. Question 16 is relevant (at least the “are there other orgs similar to you?” part) but I’m guessing you’d answer no to that?
The other answers are helpful, thanks!
Other similar orgs: SIAI, the group in Cambridge that may be founded, and some governmental and corporate future-predicting think tanks. But none of them are really that similar.
Always! But we can try...
Probably not. We don’t have any interesting secrets.
Sean is even busier than me, so I’d recommend—gulp—talking to me.
Correct.
In fact, it might be the best use of additional funding for FHI. An additional 10 hours of x-risk work (rather than grant-writing) from Nick Bostrom or Stuart Armstrong is hard to beat in terms of x-risk reduction purchased per dollar. (The other ‘core’ FHI researcher, Anders Sandberg, seems to do x-risk work less frequently than Nick and Stuart.)
Similarly, at SI we’re always trying to find ways to spend money to “free up” Eliezer, since additional hours of x-risk work from Eliezer are also very hard to beat in terms of x-risk reduction purchased per dollar.
Is he still teaching a Bayes class at minicamps?
I think he has now been replaced in that role. He is spending this week working intensively on one of the “FAI open problems” with a group of visitors every day, and there is a minicamp going on right now.
Yes yes yes! :-)
Words can only begin to describe how much time and energy even small grant applications can suck from you.