I love how useless the headings on the left are. I guess it’s not really set up for dialogs.
ANYway, I thought this bit was interesting to think about more:
I think it’s reasonable to diversify your investments and your personal spending. The reason I think it’s reasonable is essentially because of diminishing marginal returns.
There’s another key consideration, and that’s variance. If you have all your investments in the highest-EV stock, there’s some chance that it will go to zero and you’ll lose everything, and for most people that’s a super bad outcome worth paying something to avoid. Also, more subtly, variance erodes returns: if you have $100 and it goes up 10% and down 10%, in either order, you end up with $99. If it goes up and down 50%, you end up with $75.
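(To spell out the arithmetic behind those numbers: an up-move of x followed by a down-move of x, in either order, multiplies your money by (1 + x)(1 − x) = 1 − x², so the drag grows with the square of the swing: 1% at x = 10%, 25% at x = 50%.)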
I think you could make a similar argument for large charitable contributions. Suppose it turns out that the insecticide in bednets has large bad downstream effects; now all your gains are wiped out. Whereas if you funded the second-best thing as well, your overall EV might be lower, but in return you’d reduce the chance of really bad outcomes.
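(Made-up numbers just to illustrate the trade: suppose the top intervention delivers 100 units of impact per dollar if it works, but there’s a 10% chance it’s worthless, while the second-best reliably delivers 80. Going all-in on the first has EV 90 with a 10% chance of zero; splitting 50/50 has EV 85, but the worst case is 40 rather than nothing.)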
I expect that the gains from saving a life are so large and the cost so low that in practice it still makes sense to focus marginal dollars on the top very few opportunities. But I suspect that for someone who is skeptical of that argument, the rejoinder here about diversification is missing something.
Sorry about the table of contents! The LessWrong versions of my posts are auto-generated (the originals appear here).
I think your comments about variance could technically be cast in terms of diminishing marginal returns. If having zero (or negative) impact is “especially bad”, this implies that going from zero to small positive impact is “more valuable” to you than going from small positive to large positive impact (assuming we have some meaningful units of impact we’re using). UH’s argument is that this shouldn’t be the case.
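(A toy illustration of that equivalence, with made-up numbers: if your value for impact is concave, say u(x) = √x, then a 50/50 gamble between 0 and 100 units of impact is worth 0.5 × √100 = 5 to you, the same as a certain 25 units, so you’d give up as much as half the gamble’s expected impact to eliminate the variance. Risk aversion about impact and diminishing marginal value of impact are two descriptions of the same curvature.)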
The point about variance eroding returns is an interesting one and not addressed in the piece. I think the altruistic equivalent would be something like: “If humanity stakes all of its resources on something that doesn’t work out, we get wiped out and don’t get to see future opportunities; if humanity simply loses a large amount in such fashion, this diminishes its ability to try other things that might go well.” But I think the relevant actor here is mostly/probably humanity, not an altruistic individual—humanity would indeed “erode its returns” by putting too high a percentage of its resources into particular things, but it’s not clear that a similar dynamic applies for an altruistic individual (that is, it isn’t really clear that one can “reinvest” the altruistic gains one realizes, or that a big enough failure to have impact wipes someone “out of the game” as an altruistic actor).
It allows for quick navigation, but it doesn’t enable an in-depth, passive read of the contents.