Sorry about the table of contents! The LessWrong versions of my posts are auto-generated (the originals appear here).
I think your comments about variance could technically be cast in terms of diminishing marginal returns. If having zero (or negative) impact is “especially bad”, this implies that going from zero to small positive impact is “more valuable” to you than going from small positive to large positive impact (assuming we have some meaningful units of impact we’re using). UH’s argument is that this shouldn’t be the case.
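To spell that implication out, here is a toy formalization (my framing, not anything from the piece): treat the value you place on total impact x as a function u(x); "zero impact is especially bad" then amounts to u being strictly concave, at least near zero, which is just diminishing marginal returns.

```latex
% Toy formalization (my framing, not UH's): let u(x) be the value placed on
% total impact x, with u strictly concave. Then for any increment d > 0,
%   u(d) - u(0) > u(2d) - u(d),
% i.e. moving from zero to a small positive impact is worth more than an
% equal-sized move from small to larger impact -- the "diminishing marginal
% returns" reading of treating zero impact as especially bad.
```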
The point about variance eroding returns is an interesting one and not addressed in the piece. I think the altruistic equivalent would be something like: “If humanity stakes all of its resources on something that doesn’t work out, we get wiped out and don’t get to see future opportunities; if humanity simply loses a large amount in such fashion, this diminishes its ability to try other things that might go well.” But I think the relevant actor here is mostly/probably humanity, not an altruistic individual—humanity would indeed “erode its returns” by putting too high a percentage of its resources into particular things, but it’s not clear that a similar dynamic applies for an altruistic individual (that is, it isn’t really clear that one can “reinvest” the altruistic gains one realizes, or that a big enough failure to have impact wipes someone “out of the game” as an altruistic actor).
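For readers who haven't seen the investing version of the point, a quick sketch of why variance erodes compounded returns (standard finance arithmetic, not anything specific to the piece):

```latex
% Variance drag under reinvested (multiplicative) returns:
%   (1 + 0.5)(1 - 0.5) = 0.75
% a +50% period followed by a -50% period loses 25% overall, even though the
% arithmetic-average return is 0%. More generally, for per-period returns
% with mean \mu and variance \sigma^2, the long-run geometric growth rate is
% approximately
%   g \approx \mu - \frac{\sigma^2}{2},
% so variance lowers compounded growth -- but only because gains are
% reinvested each period, which is exactly the step that may not carry over
% to an individual altruist's "impact."
```

That dependence on reinvestment is why I think the analogy works better for humanity as a whole than for an individual altruistic actor.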