We’re trying to reduce existential risk—to increase the odds that an eventual Singularity is good, from the perspective of humane values. To do this, we need more rational, effective people—people who can train to do the needed research, who can fund that or other work, and who can otherwise exert influence toward good outcomes.
As a donor to SIAI, I’d be interested in hearing more about how you foresee graduates of these camps working to reduce existential risk. Is there a long-term plan in place, or are you just trying some things out?