Spin off the Center for Applied Rationality as a separate organization focused on rationality training, so that the Singularity Institute can focus more exclusively on Singularity research and outreach.
Publish additional research on AI risk and Friendly AI.
Eliezer will write an “Open Problems in Friendly AI” sequence for Less Wrong. (For news on his rationality books, see here.)
If you’re planning to earmark your donation to CFAR (Center for Applied Rationality), here’s a preview of what CFAR plans to do in the next year:
Develop additional lessons teaching the most important and useful parts of rationality. CFAR has already developed and tested over 18 hours of lessons, including classes on how to evaluate evidence using Bayesianism, how to make more accurate predictions, how to be more efficient using economics, how to use thought experiments to better understand your own motivations, and much more.
Run immersive rationality retreats to teach from our curriculum and to connect aspiring rationalists with each other. CFAR ran pilot retreats in May and June. Participants in the May retreat called it “transformative” and “astonishing,” and the average response on the survey question, “Are you glad you came? (1-10)” was a 9.4. (We don’t have the June data yet, but people were similarly enthusiastic about that one.)
Run SPARC, a camp on the advanced math of rationality for mathematically gifted high school students. CFAR has a stellar first-year class for SPARC 2012; most students admitted to the program placed in the top 50 on the USA Math Olympiad (or performed equivalently in a similar contest).
Collect longitudinal data on the effects of rationality training, to improve our curriculum and to generate promising hypotheses to test and publish, in collaboration with other researchers. CFAR has already launched a one-year randomized controlled study tracking reasoning ability and various metrics of life success, using participants in our June minicamp and a control group.
Develop apps and games about rationality, with the dual goals of (a) helping aspiring rationalists practice essential skills, and (b) making rationality fun and intriguing to a much wider audience. CFAR has two apps in beta testing: one training players to update their own beliefs the right amount after hearing other people’s beliefs, and another training players to calibrate their level of confidence in their own beliefs. CFAR is working with a developer on several more games training people to avoid cognitive biases.
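For readers who want a concrete picture of what those two beta apps train, here is a minimal sketch (in Python, and not CFAR's actual app code) of the underlying math: a Bayes update in odds form for "how much should this new evidence move my belief," and a Brier score for measuring calibration. The likelihood ratios and confidence numbers below are made up for illustration.

```python
# Rough sketch of the two skills the apps practice (illustrative only):
# (1) updating a belief the right amount after new evidence, and
# (2) scoring how well-calibrated your stated confidences are.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of a hypothesis after one piece of evidence,
    computed via Bayes' rule in odds form."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * (likelihood_if_true / likelihood_if_false)
    return posterior_odds / (1 + posterior_odds)

def brier_score(forecasts):
    """Mean squared error between stated confidence and outcome (0 or 1).
    Lower is better; always guessing 50% scores 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Example: I believe a claim with 30% confidence, and a friend's agreement
# is (hypothetically) 4x as likely if the claim is true as if it is false.
print(bayes_update(0.30, 0.8, 0.2))                  # ~0.63

# Example: score three predictions made at 90%, 70%, and 60% confidence,
# where the first two came true and the third did not.
print(brier_score([(0.9, 1), (0.7, 1), (0.6, 0)]))   # ~0.15
```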
In summary, I think SI is a bit behind where I hoped we’d be by now, though this is largely because we’ve poured so much into launching CFAR, and as a result, CFAR has turned out to be significantly more cool at launch than I had anticipated.
Fundraising has been a challenge. One donor failed to give their $46,000 pledge despite repeated reminders and requests, and our support base is (understandably) anxious to see a shift from movement-building work to FAI research, a shift I have been fighting for since I was made Executive Director. (Note that spinning off rationality work to CFAR is a substantial part of trimming SI down into being primarily an FAI research institute.)
Reforming SI into a more efficient, effective organization has been my greatest challenge. Frankly, SI was in pretty bad shape when Louie and I arrived as interns in April 2011, and there have been an incredible number of holes to dig SI out of, with several more remaining. (In contrast, it has been a joy to help set up CFAR properly from the very beginning, with all the right organizational tools and processes in place.) Reforming SI also presents a fundraising problem: the work is time-consuming and sometimes costly, but generally unexciting to donors.
I can see the light at the end of the tunnel, though. We won’t reach it if we can’t improve our fundraising success in the next 3-6 months, but it’s close enough that I can see it. SI’s path forward, from my point of view, looks like this:
We finish launching CFAR, which takes over the rationality work SI was doing. (Before January 2013.)
We change how the Singularity Summit is planned and run so that it pulls our core staff away from core mission work to a lesser degree. (Before January 2013.)
Eliezer writes the “Open Problems in Friendly AI” sequence. (Before January 2013.)
We hire 1-2 researchers to produce technical write-ups from Eliezer’s TDT article and from his “Open Problems in Friendly AI” sequence. (Beginning September 2012, except that right now we don’t have the cash to hire the 1-2 people I know of who could do this work and who want to start as soon as we have the money to hire them.)
With the “Open Problems in Friendly AI” sequence and the technical write-ups in hand, we greatly expand our efforts to show math/compsci researchers that there is a tractable, technical research program in FAI theory. As a result, some researchers work on the sexiest of these problems from their own departments, and other math researchers take more seriously the prospect of being hired by SI to do technical research in FAI theory. (Beginning, roughly, in April 2013.) Also: there won’t be classes on x-risk at SPARC (the rationality camp for young elite math talent), but some SPARC students might end up being interested in FAI stuff by osmosis.
With a more tightly honed SI, improved fundraising practices, and visible mission-central research happening, SI is able to attract more funding and hire even more FAI researchers. (Beginning, roughly, in September 2013.)
SI’s Summer 2012 Matching Drive Ends July 31st
The Singularity Institute’s summer 2012 matching drive ends on July 31st! Donate by the end of the month to have your gift matched, dollar for dollar.
As of this posting, SI has raised $70,000 of the $150,000 goal.
The announcement says:
In another post, I compared the goals in our August 2011 strategic plan to our current situation, summarizing: