Isn’t anything worth doing worth troubleshooting for a single point of failure?
Only if the relevant utility function is sublinear. If it is linear or superlinear, you aren’t risk averse, so you don’t care about a single point of failure. You care about p(failure) × 0 + p(success) × U(success).
E.g. assuming altruism is all you care about, if you could pay $10,000 for a bet which would cure all the problems in Africa with probability 0.01%, and do nothing with probability 99.99%, then you should take that bet.
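A minimal sketch of that expected-utility comparison, using the 0.01% probability and $10,000 cost from the example above; the dollar value assigned to the payoff and the log utility function are purely illustrative assumptions:

```python
import math

# Numbers from the example above; the payoff figure is an illustrative assumption.
p_success = 0.0001        # 0.01% chance the bet pays off
cost = 10_000             # dollars paid for the bet either way
good_if_success = 1e9     # assumed dollar-equivalent value of the payoff


def expected_utility(utility):
    """p(failure) * U(0) + p(success) * U(success), for a given utility function."""
    return (1 - p_success) * utility(0) + p_success * utility(good_if_success)


def linear(x):
    return x              # risk-neutral: only the expectation matters


def sublinear(x):
    return math.log1p(x)  # concave: strongly diminishing returns


# Linear utility: the bet is worth 0.0001 * 1e9 = 100,000 > 10,000, so take it.
print(expected_utility(linear), ">", cost)

# Sublinear utility: keeping the certain $10,000 (~9.2 "utils") beats the
# bet's expected utility (~0.002 "utils"), so the risk-averse agent declines.
print(expected_utility(sublinear), "<", math.log1p(cost))
```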
?
So, the project “global charity” doesn’t have a single point of failure even if all individuals choose exactly one charity each.
But I’m not sure that I’m permitted to take the global point of view—after all, I only control my own actions. From my personal vantage point, I care about charity and I care about preserving my own solvency. To secure each of these values, I should avoid allowing my plan for achieving either value to suffer from a single point of failure, no?
No.
Right, you should make sure your plan for personal solvency doesn’t have a single point of failure. As for global charity, do you really have a plan for that? My model had been that you are simply contributing to the support of some (possibly singleton) collection of plans, with the objective of maximizing expected good. If the true goal is something different—something like minimizing the chance that you have done no good at all and hence miss out on the warm fuzzies—then, by all means, spread your charity dollar around.