That’s a difficult question, but a potentially valuable one to have answered. Here’s a long list of thoughts I came up with, written not to Michael Vassar but to a regular supporter of SI:
Donations are maximally fungible and require no overhead or supervision. From the outside, you may not see how a $5000 donation to SI changes the world, but I sure as hell do. An extra $5000 means I can print 600 copies of a paperback of the first 17 chapters of HPMoR and ship one copy each to the top 600 most promising young math students (on observable indicators, like USAMO score) in the U.S. (after making contact with them whenever possible). An extra $5000 means I can produce nicely-formatted Kindle and PDF versions of The Sequences, 2006-2009 and Facing the Singularity. An extra $5000 means I can run a nationwide essay contest for the best high school essay on the importance of AI safety (to bring the topic to the minds of AI-interested high schoolers, and to find some good writers who care about AI safety). An extra $5000 means I can afford a bit more than a month of work from a new staff researcher (including salary, health coverage, and taxes).
Remember that a good volunteer is hard to find. After about a year of interacting with people who claim to want to volunteer, I can say this: If somebody approaches me who (1) has obvious skills that can produce value for SI, (2) claims they have 10+ hours/week available, and (3) claims they really want to help out, then I can predict with 60% confidence that they won’t do any valuable volunteer work for SI in the next three months. Because of this, an enormous amount of overhead goes into chunking tasks so that volunteers can do them, then handing one task to Volunteer #1, waiting for them to watch TV instead, then handing that task to Volunteer #2, waiting for them to watch TV instead, then handing that task to Volunteer #3, etc. Of course, this means that if you’re one of the volunteers doing actual work, you are a rare gem and we thank you mucho.
CFAR can generally make better use of volunteers than SI can. My guesses as to why this is the case: (1) CFAR work is more emotionally motivating because you’re producing visible effects in human lives now, rather than very slightly increasing the chances that trillions of future people will have the opportunity to live out happy lives. (2) SI volunteer-doable tasks tend to be either (a) things anyone could do, or (b) things almost nobody can do because of the amount of domain knowledge required. There’s nobody to do tasks of type (b), and few people like to do tasks of type (a) because they don’t call on anyone’s special skills. In contrast, CFAR has many volunteer-doable tasks that can be done by lots of people but not just anyone, i.e. tasks that make use of special skills and are therefore more motivating. (3) CFAR has some habits that motivate volunteers that SI hasn’t been able to mimic yet.
People generally become more useful to SI/CFAR when they move to one of the major SI/LW/CFAR hubs of people: the Bay Area or NYC. I suspect this is because (1) regular in-person contact with us reminds people of stuff we’re doing that they care about, is viscerally motivating, and allows for more opportunities to be involved than are available remotely, and because (2) LWers tend to become happier when they move to a place where lots of other aspiring rationalists are doing cool stuff. (See: The Good News of Situationist Psychology.)
Obviously, the most valuable-to-SI activity that someone can do (besides making money and donating it) will vary from person to person. I’ll give some examples below.
Examples of useful SI volunteer activities: help to moderate LW; contribute to the LW wiki; run an LW meetup group; help to translate Facing the Singularity into other languages; join our “document production team” to assist with porting The Sequences and research papers into pretty LaTeX templates; sign up for the Singularity Institute affinity card; sign up to be a Singularity Institute volunteer advisor; help us distribute Singularity Summit flyers at science and technology events in the Bay Area (contact malo@intelligence.org); tell people about the Summit and encourage them to buy their tickets before the August 15th price increase. We are currently building a new “volunteer intake system” so that we can more efficiently direct incoming volunteers to useful tasks they will feel good about helping us with.
Make yourself stronger and gain influence in the world, so that you can pivot the world in strategic ways once it becomes much clearer which particular pivots would reduce AI risk. E.g. become prestigious in math, AI, or physics so you can spread x-risk memes. Work toward becoming a policy-maker who can influence how research money for technology projects is spent, so that you can assist in differential technological development. Become an editor at important media outlets so that you can help x-risk and rationality content see the light of day. Etc.
If you’re a researcher in math, computer science, or formal philosophy, find ways to take up research projects that both advance your career and are useful for x-risk reduction. So You Want to Save the World can help you think about potential research projects of that type, and Eliezer’s forthcoming sequence on “Open Problems in Friendly AI” will also help.
I could list random thoughts on the subject for hours, but… I doubt I can answer your question. It depends too much on individual details about their skills, experience, availability, other opportunities, etc.
This might be appropriate for promoting CFAR (at least HPMoR talks about rationality), but surely not for promoting SI.
What are these habits?
If we could identify them, we’d be mimicking them already. :)