Here is one of several emails I’ve now received in response to my repeated request that potential research collaborators contact me (quoted with permission):
My name is [name]. I am a first-year student at [a university] majoring in pure math… I am rather intelligent; I estimate my score on the recent Putnam contest to be thirty, and the consensus is that the questions were of above-average difficulty this year. I really care about the Singularity Institute’s mission; I have been a utilitarian since age 11, before I knew that the idea had a name, and I have cared about existential risk since at least age twelve, when I wrote a short piece on why prevention of the heat death was the greatest moral imperative for humankind (I had come up with the idea of what was essentially a Brownian ratchet years before I read the proof of the H-theorem showing the irreversible increase in entropy).
I want to help with the theory of Friendly AI. I currently think that I could work directly on the problem, but if my comparative advantage is elsewhere I would like to know that… I would be interested in participating in a rationality camp, the Visiting Fellows program, or anything else that could help the Singularity Institute.
Ask and you shall receive.
Keep ’em coming, people!