Can’t claim to have put much thought into this topic, but here are my guesses of what the most cost-effective ways of throwing money at the problem of reducing existential risk might include:
Research into human intelligence enhancement, e.g., tech related to embryo selection.
Research into how to design/implement an international AI pause treaty, perhaps x-risk governance in general.
Try to identify more philosophical talent across the world and pay them to make philosophical progress, especially in metaphilosophy. (I’m putting some of my own money into this.)
Strategy think tanks that try to keep a big picture view of everything, propose new ideas or changes to what people/orgs should do, discuss these ideas with the relevant people, etc.
Research into public understanding of x-risks: what people’s default risk tolerances are, which arguments they can or can’t understand, etc.