Here’s Ben Kuhn on risk neutrality:

Donors driven by signalling, prestige, or warm fuzzies tend to be unhappy when charities they donate to don’t get results. But effective altruists know that, individually, we should just be maximizing expected outcome, and if that requires a high-risk strategy, so be it. In other words, even if we’re personally risk-averse, we should be altruistically risk-neutral. This (hopefully) means that we can operate something like philanthropic venture capitalists: fund pie-in-the-sky ventures that are too risky for most donors, and thus collect a risk premium (paid in QALYs, not dollars, but it’s the same idea).

Do you agree with this reasoning?

Well, let’s unpack.
I’ll set up the situation with two players. We have Alice, a flesh-and-blood human who is an effective altruist (among other things; being a human, she is not a paperclip maximizer). And we have Charlie the charity, an organization.
Notable differences between Alice and Charlie (besides the obvious ones) are that:

1. Charlie’s utility function decays (in the diminishing-marginal-returns sense) very slowly compared to Alice’s: roughly, the millionth dollar Charlie deploys does nearly as much good as the first, whereas the marginal value of Alice’s personal resources falls off quickly.
2. As a consequence, Charlie can viably be risk-neutral, while Alice is unlikely to be.
Given this, I’ll posit that it’s probably fine for Charlie to maximize expected outcome and be risk-neutral. It is not fine for Alice to do the same.

To formulate this slightly differently: it’s OK for Alice to give money to Charlie so that Charlie can act in the maximize-the-expected-outcome manner (e.g., as a philanthropic VC), but it’s not OK for Alice to run her entire life this way.
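To make the asymmetry concrete, here’s a minimal numerical sketch. The particular utility functions and payoffs are my own illustrative assumptions (log utility for Alice, linear utility for Charlie), not anything from the quoted passage; the point is just that the same positive-expected-value gamble is rejected under a sharply concave utility function and accepted under a linear one.

```python
import math

# A minimal sketch of the Alice/Charlie asymmetry. Assumptions (mine, for
# illustration only): Alice's utility is logarithmic in the outcome, so her
# marginal returns diminish quickly; Charlie's utility is linear in the
# outcome, i.e. risk-neutral by construction.

def alice_utility(outcome):
    # Concave utility: diminishing marginal returns.
    return math.log(outcome)

def charlie_utility(outcome):
    # Linear utility: the millionth unit counts as much as the first.
    return outcome

def expected_utility(utility, lottery):
    # lottery: list of (probability, outcome) pairs whose probabilities sum to 1.
    return sum(p * utility(x) for p, x in lottery)

# Safe bet: 5 units with certainty.
# VC-style gamble: 10% chance of 100 units, 90% chance of 1 unit.
# The gamble's expected value (10.9) is more than double the safe bet's (5).
safe = [(1.0, 5)]
gamble = [(0.1, 100), (0.9, 1)]

for name, u in [("Alice (log)", alice_utility), ("Charlie (linear)", charlie_utility)]:
    eu_safe = expected_utility(u, safe)
    eu_gamble = expected_utility(u, gamble)
    pick = "gamble" if eu_gamble > eu_safe else "safe bet"
    print(f"{name}: EU(safe)={eu_safe:.2f}, EU(gamble)={eu_gamble:.2f} -> prefers {pick}")
```

Under these assumptions, Alice passes up a gamble worth 10.9 in expectation for a sure 5, while Charlie happily takes it. That gap is the risk premium from the quote above, and it’s why Alice can get the benefits of risk neutrality by routing money through Charlie rather than by taking such gambles across her own life.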