I’ll set up the situation with two players. We have Alice, a flesh-and-blood human who is an effective altruist (among other things—being a human she is not a paperclip maximizer). And we have Charlie the charity, an organization.
Notable differences between Alice and Charlie (besides the obvious ones) are that:
Charlie’s marginal utility decays (in the diminishing-returns sense) very slowly compared to Alice’s.
Charlie can viably be risk-neutral, while Alice is unlikely to be.
Given this, I’ll posit that it’s probably fine for Charlie to be risk-neutral and maximize expected outcome. It is not fine for Alice to do this.
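To make the contrast concrete, here is a minimal sketch in Python. The utility functions and dollar figures are my own illustrative assumptions: log utility is the usual stand-in for sharply diminishing marginal returns (Alice), and linear utility for risk-neutrality (Charlie).

```python
# A minimal numeric sketch of the risk-neutrality gap; all figures are
# hypothetical. Log utility stands in for Alice's sharply diminishing
# marginal returns, linear utility for Charlie's near-constant ones.
import math

WEALTH = 100_000                      # hypothetical baseline resources
SAFE_GAIN = 10_000                    # certain payoff
RISKY = [(0.01, 2_000_000),           # 1% chance of a huge win...
         (0.99, 0)]                   # ...else nothing
# Expected value of the gamble: 0.01 * 2_000_000 = 20_000 > 10_000.

def u_alice(w):
    return math.log(w)                # concave: diminishing returns

def u_charlie(w):
    return w                          # linear: risk-neutral

def expected_utility(u, outcomes):
    return sum(p * u(WEALTH + gain) for p, gain in outcomes)

for name, u in (("Alice", u_alice), ("Charlie", u_charlie)):
    eu_safe = u(WEALTH + SAFE_GAIN)
    eu_risky = expected_utility(u, RISKY)
    pick = "risky" if eu_risky > eu_safe else "safe"
    print(f"{name} prefers the {pick} option "
          f"(safe: {eu_safe:,.3f}, risky: {eu_risky:,.3f})")
```

Despite the gamble’s higher expected value, Alice’s concave utility makes her decline it, while risk-neutral Charlie takes it. The same wealth level and the same bet, and the only thing that changed is the curvature of the utility function.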
To put this slightly differently: it’s OK for Alice to give money to Charlie to enable it to act in the maximize-expected-outcome manner (e.g. as a philanthropic VC), but it’s not OK for Alice to run her entire life this way.
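The philanthropic-VC framing also suggests why risk-neutrality is viable for Charlie in the first place: a fund that spreads its budget over many independent long-shot bets gets an aggregate return that concentrates near the expected value, while a lone individual making one such bet almost always loses everything. The simulation below is my own illustration of this aggregation effect, with made-up parameters.

```python
# A small simulation (parameters hypothetical) of why pooling makes
# risk-neutrality viable: splitting a budget over many independent
# long-shots drives the probability of total loss toward zero, while a
# single long-shot bet is almost always a wipeout.
import random

random.seed(0)
P_WIN = 0.01                 # each grant: 1% chance of success
PAYOFF_MULT = 150            # ...paying 150x the amount invested
BUDGET = 1_000_000
TRIALS = 2_000               # Monte Carlo repetitions per portfolio size

def portfolio_return(n_projects):
    """Total realized return when BUDGET is split over n projects."""
    per_project = BUDGET / n_projects
    total = 0.0
    for _ in range(n_projects):
        if random.random() < P_WIN:
            total += per_project * PAYOFF_MULT
    return total

for n in (1, 10, 1000):
    results = [portfolio_return(n) for _ in range(TRIALS)]
    mean = sum(results) / TRIALS
    frac_zero = sum(r == 0 for r in results) / TRIALS
    print(f"{n:>5} projects: mean return {mean:>12,.0f}, "
          f"P(total loss) = {frac_zero:.2%}")
```

Every portfolio size has the same expected return (1.5x the budget here), but with one project the chance of total loss is 99%, while with a thousand it is effectively zero. Charlie, aggregating many donors and many bets, lives near the thousand-project end; Alice, running her own life, does not.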
Well, let’s unpack.