How to Purchase AI Risk Reduction
I’m writing a series of discussion posts on how to purchase AI risk reduction (through donations to the Singularity Institute, anyway; other x-risk organizations will have to speak for themselves about their plans).
Each post outlines a concrete proposal, with cost estimates:
Also see John Maxwell’s Brainstorming additional AI risk reduction ideas.
(For a quick primer on AI risk, see Facing the Singularity.)
- A Scholarly AI Risk Wiki (May 25, 2012)
- Funding Good Research (May 27, 2012)
- Short Primers on Crucial Topics (May 31, 2012)
- Proposal for “Open Problems in Friendly AI” (Jun 1, 2012)
- Building the AI Risk Research Community (Jun 1, 2012)
- Reaching young math/compsci talent (Jun 2, 2012)
- Raising safety-consciousness among AGI researchers (Jun 2, 2012)
- Strategic research on AI risk (Jun 6, 2012)
- Building toward a Friendly AI team (Jun 6, 2012)
Your link to Facing the Singularity and the link embedded in the picture both redirect to this page.
Both links work fine for me.
I fixed them shortly after Dorikka posted.
What I don’t see people talking enough about is the obvious need for large government funding (e.g., in the US).
Ours is an incredibly large and difficult mission: to smoothly integrate humans, their qualia, and their values into the coming AI.
The government funding, of course, should not be directed by bureaucrats deciding on their own, but by, e.g., the Singularity Institute and other proponents of Friendly AI and human integration.
I.e., government funding should be directed by a formidable Singularity-preparation Political Action Committee.
I’ve recently thrown together a site-in-progress for that purpose: http://singularity-pac.com/
However, it would be better to leverage existing, developed organizations such as the Singularity Institute, Singularity University, Singularity Hub, etc.
For my part, I’d like to raise awareness, and I am leading development on singularity games; I’d like their proceeds to fund a singularity PAC.
It might also be good to simply ask, e.g., the Gates Foundation or a similar funder directly for the PAC money and get things rolling already.
What are your thoughts on this?
Two hesitations in re lots of cash:
1) What would SI do with the money? My sense is that the current management structure would be hard-pressed to absorb more than, say, twice what they currently have.
2) Government money comes with obligations, sometimes onerous, in terms of disclosure, transparency, and so on. It may not be cost-effective. I don’t know about foundation money, but I’m not sure how hands-off the Gates people are.