It seems to me that increasing the number of altruistic, intelligent rationalists via education is just a means of explicitly addressing existential risk, so your comment, while interesting, is not directly relevant to multifoliaterose’s post.
The question in the post is whether we should direct our energies explicitly towards risk reduction. I suspect the answer may be irrelevant at the moment, because the best way to reduce existential risk in the long term and the best way to achieve our other goals may both be through education/outreach.
My uncertainty also bears on the question of whether I should donate to risk-reduction charities: I question whether such charities are the best approach to reducing risk in the first place.
Possibly, but the people you want to target for education/outreach may depend on what you’d like them to eventually do, so it still seems useful to work that out first.
The people running such charities have surely already thought of the idea that education/outreach is currently the best way to reduce risk. For example, SIAI is apparently already spending almost all of its money and volunteer time on education and outreach (such as LW, Eliezer’s rationality book, the visiting fellows program, the Singularity Summit).
If you believe that education has a significant effect on existential risk, then charities not explicitly concerned with existential risk may nevertheless mitigate it more effectively as a byproduct than, say, SIAI does. In particular, you shouldn't dismiss non-risk-reduction charities out of hand because of a supposed difference of scale.
On its face, you should still expect someone whose ultimate goal is risk reduction to focus on it more effectively. But that advantage may be outweighed by a large difference in competence, or by other factors such as social influence.