If you meant to say x-risk reduction is high-status in the EA/LW community, then yes, that makes a lot more sense than what you originally said.
But I’m not actually sure how true this is in the broader EA community. E.g., GiveWell and Peter Singer are two huge players in the EA community, each with larger communities than LW (by my estimate), and neither has publicly advocated x-risk reduction. So my guess is that x-risk reduction is high-status mainly in the LW/MIRI/FHI world, and maybe around CEA as well due to its closeness to FHI. To the extent that x-risk reduction is high-status in that world, we should expect a bias toward x-risk reduction there, but that’s a pretty small world. There’s a much larger and wealthier world outside that group which is strongly biased against caring about x-risk reduction, and for this and other reasons we should expect Earth, on net, to pay way, way less attention to x-risk than is warranted.
GiveWell is doing shallow analyses of catastrophic risks, and Peter Singer has written favorably about reducing x-risk, though he hasn’t endorsed particular charities or interventions, and it’s not a regular theme in his presentations.
Thanks, I didn’t know about the Singer article.
Why do you think that there’s a bias against x-risk reduction in the broader world? I think that there’s a pretty strong case for x-risk reduction being underprioritized from a utilitarian perspective. But I don’t think that I’ve seen compelling evidence that it’s unappealing relative to a randomly chosen cause.
By “randomly chosen cause,” do you mean something like “randomly chosen among the charitable causes which have at least $500K devoted to them each year,” or “randomly chosen in the space of potential causes”?
The former.
Consider the total amount spent on the generalized cause of a randomly chosen charity with a budget of at least $500K/year. I.e., not the Local Village Center for the Blind, but humanity’s total efforts to help the blind. Now compare that with MIRI and FHI.
Agreed.
Search for ‘million donation’ on news.google.com, first two pages:
Kentucky college gets record $250 million gift
$20-million Walton donation will boost Teach for America in LA
NIH applauds $30 million donation from NFL
Emerson College gets $2 million donation
Jim Pattison makes $5 million donation for Royal Jubilee Hospital
Eric and Wendy Schmidt donate $15 million for Governors Island park
Every time I hear a dollar amount on the news, I cringe at how pathetic spending on existential risk reduction is by comparison.
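To put a rough number on this, here is a back-of-the-envelope sketch in Python. The headline amounts are the ones listed above; the combined MIRI/FHI budget figure is only an order-of-magnitude assumption for illustration, not a sourced number.

```python
# Back-of-the-envelope comparison: donations from two pages of news
# search results vs. a rough guess at annual x-risk research spending.
# The x-risk budget figure is an illustrative assumption, not sourced data.

headline_donations_musd = {
    "Kentucky college gift": 250,
    "Walton donation to Teach for America LA": 20,
    "NFL donation to NIH": 30,
    "Emerson College donation": 2,
    "Jim Pattison, Royal Jubilee Hospital": 5,
    "Schmidt donation, Governors Island park": 15,
}

total_headlines_musd = sum(headline_donations_musd.values())  # 322

# Assumed combined annual budgets of MIRI and FHI, in millions of USD
# (order-of-magnitude guess for illustration only).
assumed_xrisk_budget_musd = 3

print(f"Two pages of headline donations: ${total_headlines_musd}M")
print(f"Assumed MIRI + FHI annual budgets: ~${assumed_xrisk_budget_musd}M")
print(f"Ratio: roughly {total_headlines_musd / assumed_xrisk_budget_musd:.0f}:1")
```

Under these assumptions, two pages of news results already outweigh the assumed x-risk budgets by roughly two orders of magnitude.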
I agree that x-risk reduction is a lot less popular than, e.g., caring for the blind, but it doesn’t follow that people are strongly biased against caring about x-risk reduction. Note that x-risk reduction is a relatively new cause (because the issues didn’t become clear until relatively recently), whereas people have been caring for the blind for millennia. Under the circumstances, one would expect much more attention to go toward caring for the blind independently of whether people were biased against x-risk reduction specifically. I expect x-risk reduction to become more popular over time.