You’re thinking about it from the point of view of the receiving charity. The charity’s payoffs have a hard floor: zero. Essentially the charity holds an option (in the financial sense), and because of that it is in the charity’s best interest to drive the volatility (risk, variance, uncertainty) of the “expected monetary return” sky-high: it is insulated from the bad consequences, since the worst thing that can happen to the charity is to get zero dollars.
From the point of view of the individual, however, things look different. His payoffs do NOT have a hard floor; he is fully exposed to all the risk. For him, the volatility of the expected return is a bad thing.
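The asymmetry can be sketched numerically. This is a toy Monte Carlo illustration, not anything from the discussion itself: the zero floor, the exponential (CARA) utility for the individual, and all the numbers are my own illustrative assumptions. The point is that with a payoff floored at zero, expected payoff rises with volatility, while a fully-exposed risk-averse agent’s expected utility falls with it.

```python
import math
import random

random.seed(0)

def mc_mean(f, sigma, n=100_000):
    """Monte Carlo estimate of E[f(X)] for X ~ Normal(0, sigma)."""
    return sum(f(random.gauss(0.0, sigma)) for _ in range(n)) / n

def charity_payoff(x):
    # Payoffs floored at zero: the charity holds an option-like claim.
    return max(x, 0.0)

def individual_utility(x, a=0.5):
    # Fully exposed and risk-averse: concave (exponential/CARA) utility,
    # chosen here because it is defined for arbitrarily bad outcomes.
    return 1.0 - math.exp(-a * x)

for sigma in (1.0, 2.0, 4.0):
    print(f"sigma={sigma}: "
          f"charity EV={mc_mean(charity_payoff, sigma):+.3f}, "
          f"individual E[u]={mc_mean(individual_utility, sigma):+.3f}")
```

Running this, the charity’s expected payoff grows roughly linearly in sigma, while the individual’s expected utility drops off sharply: exactly the option-holder’s incentive to crank up variance.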
Donors driven by signalling, prestige, or warm fuzzies tend to be unhappy when charities they donate to don’t get results. But effective altruists know that individually, we should just be maximizing expected outcome, and if that requires a high-risk strategy, so be it. In other words, even if we’re personally risk-averse we should be altruistically risk neutral. This (hopefully) means that we can operate something like philanthropic venture capitalists—fund pie-in-the-sky ventures that are too risky for most donors, and thus collect a risk premium (paid in QALYs, not dollars, but it’s the same idea).
I’ll set up the situation with two players. We have Alice, a flesh-and-blood human who is an effective altruist (among other things—being a human she is not a paperclip maximizer). And we have Charlie the charity, an organization.
Notable differences between Alice and Charlie (besides the obvious ones) are that:
Charlie’s utility function decays (in the diminishing marginal returns sense) very slowly compared to Alice’s.
Charlie can viably be risk-neutral, while Alice is unlikely to be.
Given this, I’ll posit that it’s probably fine for Charlie to maximize expected outcome and be risk-neutral. It is not fine for Alice to do this.
To formulate this in a slightly different way: it’s OK for Alice to give money to Charlie to enable it to act in the maximize-the-expected-outcome manner (e.g. as a philanthropic VC), but it’s not OK for Alice to run her entire life this way.
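A toy worked example of the Alice/Charlie asymmetry (the log utility and all dollar figures are my own illustrative assumptions, not from the thread): because Charlie’s marginal utility is nearly flat over the relevant range, the same positive-expected-value gamble that a sensibly risk-averse Alice refuses is one Charlie should take.

```python
import math

def log_utility(wealth):
    # A standard stand-in for diminishing marginal returns.
    return math.log(wealth)

def expected_utility(wealth, outcomes):
    """outcomes: list of (probability, payoff) pairs."""
    return sum(p * log_utility(wealth + payoff) for p, payoff in outcomes)

sure_thing = [(1.0, 1_000)]             # a certain $1,000
gamble     = [(0.9, 0), (0.1, 15_000)]  # EV $1,500 -- higher expected value

# Alice has modest wealth; Charlie's "wealth" (budget) is large enough
# that its utility is nearly linear over gains of this size.
for name, wealth in [("Alice", 10_000), ("Charlie", 10_000_000)]:
    pick = ("gamble" if expected_utility(wealth, gamble)
                      > expected_utility(wealth, sure_thing)
            else "sure thing")
    print(f"{name} (wealth {wealth:,}): prefers the {pick}")
```

With these numbers Alice prefers the sure thing and Charlie prefers the gamble, which is the whole point: Alice can rationally decline risks in her own life while funding an organization that takes them.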
Sorry, I don’t understand your reply.
Here’s Ben Kuhn on risk neutrality:
Do you agree with this reasoning?
Well, let’s unpack.