Apparently, at a recent EA summit Robin Hanson berated the attendees for giving to more than one charity. I think his critique is salient: given our human scope insensitivity, giving all your charity money to one cause feels like helping with only *one* thing, even if that one organization does vastly more good, much more efficiently, than any other group, so that every dollar given to it does more good than anything else that could be done with that dollar. The more rational and more effective course is to find the most efficient charity and give only to that charity, until it has achieved its goal so completely that it is no longer the most efficient charity.
That said, I feel that there are at least some circumstances under which it is appropriate to divide one’s charity dollars: namely, those involving risky investments.
If a positive singularity were to occur, the impact would be enormous: it would swamp any other good that I could conceivably do. Yet I don’t know how likely a positive singularity is; it seems to be a long shot. Furthermore, I don’t know how much my charity dollars affect the probability one way or the other. It may be that a positive singularity (a “p-singularity”) will either happen or it won’t, and there’s not much I can do about it. The pay-off is huge but highly uncertain. In contrast, I could (for instance) buy mosquito nets for third-world countries, which has a lower but much more certain pay-off.
Some people are more risk-seeking than others, and it seems to be a matter of preference whether one takes risky bets or more certain ones. However, there are “irrational” answers, since one can calculate the expected pay-off of a gamble by mere multiplication. It is true that it is imprudent to bet one’s life savings on an unlikely chance of unimaginable wealth, but this is because of quirks of human utility calculation: losses are more painful than gains are enjoyable, and diminishing marginal returns are in play. To most of us, a gift of a billion dollars is not very emotionally different from a gift of two billion, and we would not be indifferent between a 100% chance of getting a billion dollars on the one hand, and a 50% chance of getting two billion dollars (with a 50% chance of getting nothing) on the other. In fact, I would trade a 50/50 chance of a billion for a 100% certainty of 10 million. But we would do well to stick to mathematically calculated expected pay-offs for any “games” that are small enough, or frequent enough, that improbable flukes will cancel out on net.
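To make that arithmetic concrete, here is a minimal sketch of how raw expected value and diminishing-marginal-utility expected value come apart. The dollar figures come from the example above, but the log utility curve is just an illustrative assumption of mine, not part of the argument:

```python
import math

def expected_value(outcomes):
    """Expected payoff of a gamble given as (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

def expected_utility(outcomes, utility=math.log1p):
    """Expected utility under an (assumed) diminishing-returns log utility curve."""
    return sum(p * utility(x) for p, x in outcomes)

sure_billion    = [(1.0, 1e9)]
coin_flip_2bn   = [(0.5, 2e9), (0.5, 0.0)]
sure_10_million = [(1.0, 1e7)]
coin_flip_1bn   = [(0.5, 1e9), (0.5, 0.0)]

# Raw expected dollars: the sure billion and the 50/50 two-billion gamble tie.
print(expected_value(sure_billion), expected_value(coin_flip_2bn))          # 1e9, 1e9

# Under diminishing marginal utility, the sure thing wins, and even a certain
# 10 million beats a 50/50 shot at a billion.
print(expected_utility(sure_billion) > expected_utility(coin_flip_2bn))     # True
print(expected_utility(sure_10_million) > expected_utility(coin_flip_1bn))  # True
```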
Let’s say you walk into the psychology department, where Kahneman and Tversky offer you a trade-off: you can save 50 lives, or you can “sell” some or all of those lives, at a rate of a 0.005% increase per life sold, in the probability of an outcome in which no one ever dies again, every problem that has ever plagued humanity is solved, and post-humans impregnate the universe with life. That sounds fantastic, but even selling all 50 lives only increases the probability of such an outcome by a quarter of a percent. Is any ratio of “lives saved” to “incremental increases in the probability of total awesomeness” rational? Is it just a matter of personal preference how much risk you decide to take on? Ought you to determine your conversion factor between human lives and increases in the probability of a p-singularity, and go all in based on whether the ratio offered to you is above or below your own (i.e., whether you’re getting a “good deal”)? A sketch of that decision rule follows.
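For concreteness, here is what that “go all in if the offered ratio beats your own” rule could look like. The threshold is a made-up number purely for illustration, not a conversion factor I endorse:

```python
def should_go_all_in(offered_prob_increase_per_life: float,
                     my_prob_increase_per_life: float) -> bool:
    """Sell lives for probability only if the offer beats my own conversion factor."""
    return offered_prob_increase_per_life > my_prob_increase_per_life

# Kahneman and Tversky's offer: 0.005 percentage points per life, 50 lives on
# the table, so at most a 0.25 percentage-point increase in total.
offered = 0.005

# Suppose (hypothetically) I would demand at least 0.01 percentage points per
# life before trading any lives away.
my_threshold = 0.01

print(should_go_all_in(offered, my_threshold))  # False: keep saving the 50 lives
```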
I feel like there’s a good chance that we’ll screw it all up and be extinct within the next 200 years. I want to stop that, but I also want to hedge my bets. If it does all go boom, I want to have spent at least some of my resources making the time we have better for as many people as possible. It even seems selfish not to help those in need so that I can push up the probability of an awesome, but highly uncertain, future. That feels almost like making reckless investments with other people’s money. But maybe I just haven’t gotten myself out of the cognitive trap that Robin accused us of.