I’m having trouble deciding where I should target my altruism. I’m basically just starting out in life, and I recognize that this is a really valuable opportunity that I won’t get in the future – I have no responsibilities and no sunk costs yet. For a long time, I’ve been looking into the idea of efficient charity, and I’ve been browsing places like 80,000 Hours and GiveWell. But I get the feeling that I might be too emotionally invested in the idea of donating to, say, disease prevention or inoculation or deworming (found to be among the most efficient conventional charities) over, say, Friendly AI.
I think that given my skills, personality, etc., there are some good and bad reasons to go into either existential risk mitigation or health interventions, but it’s not like the balance is going to come out exactly even. I need some help figuring out what to do – I suppose I could work out some “combination,” but it’s usually not a good idea to hedge your bets this way, because you aren’t making the maximum impact.
Direct reasons for health interventions:
Lots of good data (an actual dollar amount per life saved has been calculated) and a low risk of failure, whereas despite everything I’ve read I’m really not sure where we stand with x-risk, what to do about it, or how to calculate whether we’ve reduced it
Easy to switch from charity to charity and intervention to intervention as new evidence rolls in, whereas x-risk requires long-term commitments to long-range projects, with essentially only one institution working in the area
I would most likely be best off giving money, which I can confidently say I’ll be able to generate; there’s little room for doubt about whether my actions are making an impact, whereas with x-risk I don’t know how much good my actions are creating or whether they’re even helping at all
Doesn’t require many more skills than I currently have in order to estimate the costs and benefits accurately
It saves lives immediately and continuously, which will be good for my motivation and for the probability that I actually stick with it and stay altruistic; I also feel a stronger emotional connection to people who are alive today than to potential future humans, although I don’t know whether that’s a mistake or a value
Selfish reasons for health interventions:
It would make me look virtuous and self-sacrificing when I’m really not sacrificing much, since there aren’t that many material goods I enjoy
Would make me look far less weird and embarrassing at dinner parties
Direct reasons for X-risk:
It is by far the highest-stakes problem we have: with health interventions I would be saving lives one by one, in the dozens or hundreds, whereas even a small marginal impact on x-risk could translate to many, many more lives
Reducing x-risk by helping to bring about Friendly AI would also increase the probability of all sorts of good things, like an end to all disease and the stabilization of the economy, whereas the positive externalities of health interventions are not as dramatic
Selfish reasons for X-risk:
I get to feel like I’m on an epic adventure to save the world like all my favorite SF/F heroes
It sounds like it would be a pretty awesomely fun challenge and I’d work with cool people
My personal situation: I’m a senior in high school, and I’ve read up quite a bit on both developing-world health interventions and x-risk. My programming is abysmal, but I can improve quickly because I like it; I’m hoping to go into CS. I’m not the most highly motivated of people – to be quite honest, I’m a procrastinator and would make less of a marginal impact working for SIAI than some other readers of LW. (That’s where I stand now, but I want to improve my productivity.)
I would do well in a low-stress, medium-to-high-pay job through which I could donate money. I harbor the dream of having a really “cool” job, but it’s not necessarily a priority – I can trade awesomeness and challenge for security and ease, as long as the money I donate goes to the most useful cause.
I don’t know what I should do. Low risk, low impact? High risk, high impact? Some specific combination? Are there any variables I’m missing here? I’d love some data that would make the choice clearer.
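To make the trade-off concrete, here’s a rough sketch of the kind of expected-value comparison I’d like to be able to do. Every number in it is a made-up placeholder rather than a real estimate, and the whole thing hinges on the one term nobody seems to know how to estimate:

```python
# Rough back-of-the-envelope expected-value comparison.
# Every number here is a made-up placeholder, not a real estimate.

annual_donation = 10_000        # dollars donated per year (assumed)
cost_per_life_saved = 5_000     # placeholder figure; check GiveWell's current numbers

# Health interventions: a fairly direct conversion from dollars to lives.
expected_lives_health = annual_donation / cost_per_life_saved

# X-risk: expected lives = (lives at stake) * (change in catastrophe probability
# attributable to my contribution). Both factors are deeply uncertain.
lives_at_stake = 7_000_000_000  # present population only; ignores future generations
delta_p = 1e-9                  # wild guess at my marginal effect (assumed)
expected_lives_xrisk = lives_at_stake * delta_p

print(f"Health interventions: ~{expected_lives_health:.1f} expected lives per year")
print(f"X-risk reduction:     ~{expected_lives_xrisk:.1f} expected lives per year")
# The answer is dominated by delta_p, which is exactly the quantity
# nobody seems to know how to estimate – hence my dilemma.
```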
I suggest that you practice thinking of yourself as a future member of the posthuman ruling class. In this century, the planet faces the pressures of 9 or 10 billion people trying to raise their standards of living, amid ecological changes like global warming, and at the same time biotechnology, neurotechnology, and artificial intelligence will come tumbling out of Pandora’s box. The challenges of the future are about posthuman people living in a postnatural world, and many of the categories which inform current thinking about “efficient charity” and “existential risk” are liable to become obsolete.
Sounds like you should ask for a call with the philanthropic career experts at 80,000 Hours if you haven’t already.