I’d love a discussion of finance accessible to a smart but finance-phobic lay audience (raises hand).
For example, I too have been under the impression that I diversify my investments, not because I’m concerned that my investible income will drive some specific company over an inflection point in its utility function, but because it makes me less vulnerable to a single point of failure. (Indeed, I’m still under that impression; the alternative seems absurd on the face of it.)
I think the point is that investments pay a return to you, so a single point of failure really hurts you where it counts.
Charities, on the other hand, pay their return to the world. The world is not horribly damaged if a single charity fails; the world is served by many charities. In effect, the world is already diversified.
If the charity you gave all your donations to fails, you may feel bad, but you will get over it. Not necessarily the case if the investment you sunk all your own money into fails.
Sure, I get that. The thing I was responding to was jsalvatier’s comment that “It’s important to understand why you normally diversify. The reason why diversification is a good idea is because you have diminishing marginal utility of wealth.” S/he wasn’t talking about charity there, I don’t think… though maybe I was confused about that.
“Diminishing marginal utility of wealth” means the same thing as “don’t want to be exposed to a single point of failure”.
Yes, I think we do need a series on econ/finance.
(blink)
Two investment strategies, S1 & S2. They have the same expected ROI, but S1 involves investing all my money in a single highly speculative company with far higher variance… my investment might go up or down by an order of magnitude. So S1 suffers from a single point of failure relative to S2.
You’re saying that I could just as readily express this by saying “S1 involves diminishing marginal utility of wealth relative to S2”… yes?
Huh. I conclude that I haven’t been understanding this conversation from the get-go. In my defense, I did describe myself as finance-phobic to begin with.
No: what’s in question is the scaling behavior of your own utility function with respect to money; if you exhibit diminishing marginal utility of wealth, that means you want to avoid S1.
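To make that concrete, here is a toy sketch (all numbers are assumed for illustration, and log utility stands in for any utility function with diminishing marginal returns; none of this is from the thread itself):

```python
import math

wealth = 100_000.0

# S1: everything in one speculative company -- 50/50 it grows 10x or falls to a tenth.
s1 = [(0.5, 10 * wealth), (0.5, 0.1 * wealth)]
# S2: diversified -- the same expected dollar payout, modeled here as a sure thing.
s2 = [(1.0, 5.05 * wealth)]

def expected_value(lottery):
    return sum(p * w for p, w in lottery)

def expected_utility(lottery, utility):
    return sum(p * utility(w) for p, w in lottery)

print(expected_value(s1), expected_value(s2))  # 505000.0 505000.0 -- identical
print(expected_utility(s1, math.log))          # ~11.51
print(expected_utility(s2, math.log))          # ~13.13 -> concave utility prefers S2
print(expected_utility(s1, lambda w: w))       # 505000.0
print(expected_utility(s2, lambda w: w))       # 505000.0 -> linear utility is indifferent
```

Same expected dollars either way; the concave utility function penalizes S1’s variance, which is the whole content of “diminishing marginal utility of wealth implies diversify.” The linear case is exactly the risk-neutral situation discussed below.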
Isn’t anything worth doing worth troubleshooting for a single point of failure?
Only if the relevant utility function is concave. If the relevant utility function is linear or convex, you aren’t risk averse, so you don’t care about a single point of failure. You care about p(failure)·0 + p(success)·U(success).
E.g., assuming altruism is all you care about: if you could pay $10,000 for a bet which would cure all the problems in Africa with probability 0.01%, and do nothing with probability 99.99%, then you should take that bet, since at any plausible valuation of that outcome, a 0.01% shot at it is worth far more in expectation than the good $10,000 buys directly.
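Spelled out as arithmetic (the dollar-equivalent value of the cure is an assumed free parameter here, not a figure from the thread; the breakeven, cost ÷ probability, is $100 million):

```python
cost = 10_000.0     # price of the bet: $10,000 that could have been donated directly
p_success = 0.0001  # 0.01% chance the bet pays off

def take_bet(value_if_success):
    # Risk-neutral (linear-utility) rule: take the bet iff its expected good
    # beats donating the $10,000 directly (assuming $1 donated ~ $1 of good).
    return p_success * value_if_success > cost

print(take_bet(1e9))  # True: 0.0001 * 1e9 = 100,000 dollars-of-good > 10,000
print(take_bet(1e7))  # False: 0.0001 * 1e7 = 1,000 < 10,000
# Breakeven: value_if_success = cost / p_success = $100,000,000.
```

Any valuation of “cure all the problems in Africa” above $100 million makes the bet a win in expectation, and a linear utility function cares about nothing else.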
?
So, the project “global charity” doesn’t have a single point of failure even if all individuals choose exactly one charity each.
But I’m not sure that I’m permitted to take the global point of view—after all, I only control my own actions. From my personal vantage point, I care about charity and I care about preserving my own solvency. To secure each of these values, I should avoid allowing my plan for achieving either value to suffer from a single point of failure, no?
No.
Right, you should make sure your plan for personal solvency doesn’t have a single point of failure. As for global charity, do you really have a plan for that? My model had been that you are simply contributing to the support of some (possibly singleton) collection of plans, with the objective of maximizing expected good. If the true goal is something different, something like minimizing the chance that you have done no good at all and hence miss out on the warm fuzzies, then by all means, spread your charity dollars around.