I agree. But getting people to accept optimal philanthropy in uncontroversial domains is a necessary precursor to getting them to accept x-risk. In fact, I have had conversations with people high up in organizations like GiveWell and GWWC who used this explicit argument: gain reputational capital by succeeding at third-world poverty reduction, then expend it on x-risk.
Exactly. Even if a LWer is convinced that giving to existential risk charities is optimal, they should still favor persuading people to become better philanthropists in uncontroversial domains whenever it isn't possible to directly persuade them to support existential risk reduction.
I have to know… why ‘Formally’? It’s distracting me while I read the new comments thread. :)