How to Purchase AI Risk Reduction
I’m writing a series of discussion posts on how to purchase AI risk reduction (through donations to the Singularity Institute, anyway; other x-risk organizations will have to speak for themselves about their plans).
Each post outlines a concrete proposal, with cost estimates; the posts in the series, along with related discussion, are linked below.
Also see John Maxwell’s Brainstorming additional AI risk reduction ideas.
(For a quick primer on AI risk, see Facing the Singularity.)
- Reply to Holden on ‘Tool AI’ (12 Jun 2012 18:00 UTC; 152 points)
- Reply to Holden on The Singularity Institute (10 Jul 2012 23:20 UTC; 69 points)
- 2012 Winter Fundraiser for the Singularity Institute (6 Dec 2012 22:41 UTC; 48 points)
- Building toward a Friendly AI team (6 Jun 2012 18:57 UTC; 39 points)
- Funding Good Research (27 May 2012 6:41 UTC; 38 points)
- Short Primers on Crucial Topics (31 May 2012 0:46 UTC; 35 points)
- Proposal for “Open Problems in Friendly AI” (1 Jun 2012 2:06 UTC; 33 points)
- A Scholarly AI Risk Wiki (25 May 2012 20:53 UTC; 28 points)
- Building the AI Risk Research Community (1 Jun 2012 2:13 UTC; 26 points)
- Raising safety-consciousness among AGI researchers (2 Jun 2012 21:39 UTC; 21 points)
- SI’s Summer 2012 Matching Drive Ends July 31st (20 Jul 2012 5:48 UTC; 19 points)
- Strategic research on AI risk (6 Jun 2012 17:02 UTC; 13 points)
- Reaching young math/compsci talent (2 Jun 2012 21:07 UTC; 10 points)
Your link to Facing the Singularity and the link embedded in the picture both redirect to this page.
Both links work fine for me.
I fixed them shortly after Dorikka posted.
What I don’t see people talking about enough is the obvious need for large government funding (e.g., in the US).
Ours is an incredibly large and difficult mission: to smoothly integrate humans, their qualia, and their values into the coming AI.
This government funding, of course, should not be directed by bureaucrats deciding on their own, but by, e.g., the Singularity Institute and other proponents of Friendly AI and human integration.
That is, government funding should be directed by a formidable Singularity-preparation Political Action Committee (PAC).
I’ve recently thrown together a site-in-progress for that purpose: http://singularity-pac.com/.
However, it would be better to leverage existing, established organizations such as the Singularity Institute, Singularity University, Singularity Hub, etc.
For my part, I’d like to raise awareness, and I am leading development on Singularity games; I’d like to use the proceeds to fund a Singularity PAC.
It might also be good to directly ask, e.g., the Gates Foundation or a similar funder for the PAC money and get things rolling already.
What are your thoughts on this?
Two hesitations about lots of cash:
1) What would SI do with the money? My sense is that the current management structure would be hard-pressed to absorb more than, say, twice what they currently have.
2) Government money comes with sometimes onerous obligations regarding disclosure, transparency, and so on; it may not be cost-effective. I know less about foundation money, but I’m not sure how hands-off the Gates people are.