Building the AI Risk Research Community

Series: How to Purchase AI Risk Reduction

Yet another way to purchase reductions in AI risk may be to grow the AI risk research community.

The AI risk research community is pretty small. It currently consists of:

  • 4-ish AI risk researchers at the Singularity Institute. (Eliezer is helping to launch CFAR before he returns to AI risk research. The AI risk research done at SI right now consists of about 40% of Carl's time, 25% of mine, plus large and small fractions of various remote researchers' time, most significantly about 90% of Kaj Sotala's.)

  • 4-ish AI risk researchers at the Future of Humanity Institute: Nick Bostrom, Anders Sandberg, Stuart Armstrong, Vincent Mueller. (This number might be wrong. It seems that Nick and Stuart are working basically full-time on AI risk right now, but I’m not sure about Anders and Vincent. Also, FHI should be hiring someone shortly with the Tamas Research Fellowship money. Finally, note that FHI has a broader mission than AI risk: they are focusing on AI risk while Nick works on his Superintelligence book, but they will probably return to other subjects sometime thereafter.)

  • 0.6-ish AI risk researchers at Leverage Research, maybe?

  • 0.2-ish AI risk researchers at GCRI, maybe?

  • Nobody yet at CSER, but maybe 1-2 people in the relatively near future?

  • Occasionally, something useful might come from mainstream machine ethics, but researchers in that field mostly aren’t focused on problems of machine superintelligence (yet).

  • Small fractions of some people in the broader AI risk community, e.g. Ben Goertzel, David Chalmers, and Wei Dai.

Obviously, a larger AI risk research community could be more productive. (It could also grow to include more people but fail to do actually useful work, like so many academic disciplines. But there are ways to push such a small field in useful directions as it grows.)

So, how would one grow the AI risk research community? Here are some methods:

  1. Make it easier for AI risk researchers to do their work, by providing a well-organized platform of prior work on which they can build. Nick’s Superintelligence book will help with that. So would a scholarly AI risk wiki. So do sites like Existential-Risk.org, IntelligenceExplosion.com, Friendly-AI.com, Friendly AI Research, and SI’s Singularity FAQ. So does my AI Risk Bibliography 2012, and so do “basics” or “survey” articles like Artificial Intelligence as a Positive and Negative Factor in Global Risk, The Singularity: A Philosophical Analysis, and Intelligence Explosion: Evidence and Import. So do helpful lists like journals that may publish articles related to AI risk. (If you don’t think such things are useful, then you probably don’t know what it’s like to be a researcher trying to develop papers in the field. When I send my AI risk bibliography and my list of “journals that may publish articles related to AI risk” to AI risk researchers, I get back emails that say “thank you” with multiple exclamation points.)

  2. Run an annual conference for researchers, put out a call for papers, etc. This brings researchers together and creates a community. The AGI conference series did this for AGI. The new AGI Impacts sessions at AGI-12 could potentially be grown into an AGI Impacts conference series that would effectively be an AI risk conference series.

  3. Maybe launch a journal for AI risk papers. Like a conference, this would to some degree bring the community closer together. It could also provide a place to publish articles that don’t fit within the scope of any existing journal. I say “maybe” on this one because it can be costly to run a journal well, and there are plenty of journals already that will publish papers on AI risk.

  4. Give out grants for AI risk research.

Here’s just one example of what SI is currently doing to help grow the AI risk research community:

Writing “Responses to Catastrophic AGI Risk”: A journal-bound summary of the AI risk problem, and a taxonomy of the societal proposals (e.g. denial of the risk, no action, legal and economic controls, differential technological development) and AI design proposals (e.g. AI confinement, chaining, Oracle AI, FAI) that have been made.

Estimated final cost: $5,000 for Kaj’s time, $500 for other remote research, and 30 hours of Luke’s time.

Now, here’s a list of things SI could be doing to help grow the AI risk research community:

  • Creating a scholarly AI risk wiki. Estimated cost: 1,920 hours of SI staff time (over two years), $384,000 for remote researchers and writers, and $30,000 for wiki design, development, and hosting costs.

  • Helping to grow the AGI Impacts sessions at AGI-12 into an AGI Impacts conference. (No cost estimate yet.)

  • Writing Open Problems in Friendly AI. Estimated cost: 2 months of Eliezer’s time, 250 hours of Luke’s time, $40,000 for internal and external researchers.

  • Writing more “basics” and “survey” articles on AI risk topics.

  • Giving out grants for AI risk research.