In general, there seems to have been substantial planning fallacy about how easy it would be to get skilled people to make progress on them via the Visiting Fellows program and other means. Versions of many of them have eventually come into being (as discussed below), but with great delays. And it seems that delivery of the planned reporting infrastructure failed badly. With respect to the individual papers:
Containing superintelligence led to this paper, which was accepted for a subsequently cancelled conference and is now seeking a venue, as well as (I believe) to an accepted Singularity Hypothesis chapter by Daniel Dewey.
The WBE-AGI one has lagged, but it is now a submission (by myself and Anders Sandberg) to the JCS special issue on Chalmers' Singularity paper, with presentations of the content at FHI, San Diego State University, and the AGI-11 workshop on the future of AI.
Collective Action Problems and AI Risk led to another Singularity Hypothesis submission.
AI risk philanthropy was taken on by an external author who never delivered, and the project subsequently had to be transferred to a different person, who hasn't finished it yet.
There is an incarnation of the Singularity FAQ, and lukeprog, along with Anna Salamon, has custody of the landing pages project, with an academic one in place (although it is competing with the minicamp/bootcamp for their time).
The Coherence of Human Goals led to this paper, at AGI-10.
The Visitors Grants were used.
The two papers at the top, submissions funded before the Challenge for the Minds and Machines issue, went into limbo after that issue was cancelled.
Software Minds and Endogenous Growth led to a paper at ECAP-2010, which is under continued development but is not yet a journal article.
And then there have been various other non-Challenge papers, like Bostrom and Yudkowsky’s joint piece on AI ethics, my piece with Bostrom on inference from evolution to AI difficulty, etc.
Note that that one wasn’t actually funded. The ECAP paper is online here.
Right, it didn’t get earmarked donations; only two papers were specifically funded in the challenge grant. For the most part, people weren’t interested in funding specific projects, and the challenge money primarily went to general funds.