Robin is correct. Here is an accessible explanation. Suppose you first give $1 to MIRI because you believe MIRI is the charity with the highest marginal utility in donations right now. The only reason you would then give the next $1 in your charity budget to anyone other than MIRI would be that MIRI is no longer the highest marginal utility charity. In other words, you’d have to believe that your first donation made a dent into the FAI problem, and hence lowered the marginal utility of a MIRI dollar by enough to make another charity come out on top. But your individual contributions can’t make any such dent.
Some sensible reasons for splitting donations involve donations at different times (changes in room for more funding, etc.) and donations that are highly correlated with many other people’s donations (e.g. the people giving to GiveWell top charities) and might therefore actually make dents.
You’re assuming you’re certain about your estimates of the charities’ marginal utility. If you’re uncertain about them, things change.
Compare this to investing in financial markets. Why don’t you invest all your money in a single asset with the highest return? Because you’re uncertain about returns and diversification is a useful thing to manage your risk.
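To spell the analogy out with a toy sketch (the asset and its payoffs are made up; nothing here is about a real instrument): splitting a fixed sum across independent assets with identical expected returns leaves the expected return unchanged but shrinks the variance, which is exactly what makes diversification valuable to a risk-averse investor.

```python
import random

random.seed(0)

def portfolio(n_assets, trials=100_000):
    """Mean and variance of the return when $1 is split equally across
    n_assets independent, identical assets (each pays +100% or -50%, 50/50)."""
    rets = [sum(random.choice([1.0, -0.5]) for _ in range(n_assets)) / n_assets
            for _ in range(trials)]
    mean = sum(rets) / trials
    var = sum((r - mean) ** 2 for r in rets) / trials
    return mean, var

print(portfolio(1))    # ≈ (0.25, 0.56):  everything in one asset
print(portfolio(10))   # ≈ (0.25, 0.056): same expected return, about 1/10 the variance
```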
But presumably you’re risk-neutral with respect to altruism, though not with respect to your own personal finances.
I don’t see being risk-neutral with respect to altruism as obvious. If it turns out that you misallocated your charity dollars, you have incurred opportunity costs. In general, people are not risk-neutral with respect to things they care about.
Well, you’re probably less risk averse with regard to altruism. I imagine most people would still be upset to see the charity they’ve been donating to for years go under.
No, I’m not relying on that assumption, though I admit I was not clear about this. The argument goes through perfectly well if we consider expected marginal utilities.
Investors are risk-averse because not-too-unlikely scenarios can affect your wealth enough to make the concavity of your utility function over wealth matter. For FAI or world poverty, none of your donations at a given time will make enough of a dent.
I think the countervailing intuition comes from two sources: 1) Even when instructed about the definition of utility, certainty equivalents of gambles, and so on, people have a persistent intuition that utility itself has declining marginal utility. 2) We care not only about poor people being made better off (where our donations can’t make a dent) but also about creating a feeling of moral satisfaction within ourselves (where donations to a particular cause can satiate that feeling, leading us to want to help some other folks, or cute puppies).
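To put toy numbers on the “no dent” point (the figures below are invented purely for illustration): even a sharply concave utility function over a cause’s total resources is effectively linear over the range a single donor can move it, so the curvature contributes essentially no risk aversion at donation scale.

```python
import math

# Concave "utility of the cause's total resources" -- an illustrative choice only.
u = math.log

TOTAL = 100_000_000   # resources the cause already commands, in dollars (made up)
DONATION = 1_000      # roughly how far one donor can move that total (made up)

exact = u(TOTAL + DONATION) - u(TOTAL)  # true utility gain from the donation
linear = DONATION / TOTAL               # first-order (risk-neutral) approximation

print(exact)                          # ≈ 9.99995e-06
print(linear)                         # 1e-05
print(abs(exact - linear) / linear)   # ≈ 5e-06: curvature shifts the answer by ~0.0005%
```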
You are wrong about this. See e.g. here or in a slightly longer version here.
But let’s see how your intuition works. Charity A is an established organization with well-known business practices, and for years it has steadily been generating about 1 QALY per $1. Charity B is a newcomer that no one really knows much about. As far as you can tell, $1 given to it has a 1.1% chance of generating 100 QALYs and is otherwise wasted, but you’re not sure about these numbers; they are just a low-credence guess. To whom do you donate?
Adding imprecise probability (a 1.1% credence that I’m not sure of) takes us a bit afield, I think. Imprecise probability doesn’t have an established decision theory in the way probability has expected utility theory. But that aside, assuming that I’m calibrated in the 1% range, that I’m good at introspection, and that my introspection really tells me that my expected QALY/$ for charity B is 1.1, I’ll donate to charity B. I don’t know how else to make this decision. I’m curious to hear how much meta-confidence/precision you need for that 1.1% chance for you to switch from A to B (or go “all in” on B). If not even full precision (e.g. the outcome being tied to an RNG) is enough for you, then you’re maximizing something other than expected QALYs.
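For concreteness, the arithmetic behind that answer as a quick sketch (the 1 QALY per $1 and the 1.1% × 100 QALY figures are the ones from your example; the budget is an arbitrary placeholder):

```python
# Expected QALYs per dollar, using the figures from the example above.
ev_a = 1.0           # charity A: about 1 QALY per $1, reliably
ev_b = 0.011 * 100   # charity B: 1.1% chance of 100 QALYs per $1, else nothing

budget = 1_000       # arbitrary charity budget in dollars
allocation = {"A": budget, "B": 0} if ev_a >= ev_b else {"A": 0, "B": budget}
print(ev_a, ev_b, allocation)
# A: 1.0 vs. B: ~1.1 expected QALYs per dollar -> the whole budget goes to B
```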
(I agree with Gelman that risk-aversion estimates from undergraduates don’t make any financial sense. Neither do estimates of their time preference. That just means that people compartmentalize or outsource financial decisions where the stakes are actually high.)
If you’re truly risk-neutral, you would discount all uncertainty to zero; the expected value is all you’d care about.
Your introspection tells you that you’re uncertain. Your best guess is 1.1, but it’s just a guess. The uncertainty is very high.
Oh, there are plenty of ways; just look at finance. Here’s a possible starting point.
Gelman’s point has nothing to do with whether undergrads have any financial sense or not. Gelman’s point is that treating risk aversion as solely a function of the curvature of the utility function makes no sense whatsoever—for all humans.
Let me try to refocus a bit. You seem to want to describe a situation where I have uncertainty about probabilities, and hence uncertainty about expected values. If this is not so, your points are plainly inconsistent with expected utility maximization, assuming that your utility is roughly linear in QALYs in the range you can affect. If you are appealing to imprecise probability, what I alluded to by “I have no idea” is that there are no generally accepted theories (certainly not “plenty”) for decision making with imprecise credence. It is very misleading to invoke diversification, risk premia, etc. as analogous or applicable to this discussion. None of these concepts make any essential use of imprecise probability in the way your example does.
Correct.
Really? Keep in mind that in reality people make decisions on the basis of “imprecise probabilities” all the time. In fact, outside of controlled experiments, it’s quite unusual to know the precise probability because real-life processes are, generally speaking, not that stable.
On the contrary, I believe it’s very illuminating to apply these concepts to the topic under discussion.
I did mention finance, which is a useful example because it’s a field where people deal with imprecise probabilities all the time and the outcomes of their decisions are both very clear and very motivating. You don’t imagine that when someone, say, characterizes a financial asset as having an expected return of 5% with 20% volatility, these probabilities are precise, do you?
There are two very different sorts of scenarios with something like “imprecise probabilities”.
The first sort of case involves uncertainty about a probability-like parameter of a physical system such as a biased coin. In a sense, you’re uncertain about “the probability that the coin will come up heads” because you have uncertainty about the bias parameter. But when you consider your subjective credence about the event “the next toss will come up heads”, and integrate the conditional probabilities over the range of parameter values, what you end up with is a constant. No uncertainty.
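A toy version of that integration (the prior over the bias is made up; any prior gives the same kind of answer, namely a single number):

```python
# Uncertainty about a coin's bias, but no uncertainty about the credence
# for "the next toss comes up heads". Prior chosen arbitrarily for illustration.
prior = {0.3: 0.25, 0.5: 0.5, 0.7: 0.25}   # P(bias = b) for a few candidate biases

# Integrate (here: sum) P(heads | bias) over the prior on the bias.
p_heads = sum(bias * weight for bias, weight in prior.items())
print(p_heads)   # 0.5 -- one definite number, however unsure we are about the bias
```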
In the second sort of case, your very subjective credences are uncertain. On the usual definition of subjective probabilities in terms of betting odds this is nonsense, but maybe it makes some sense for boundedly introspective humans. Approximately none of the decision theory corpus applies to this case, because it all assumes that credences and expected values are constants known to the agent. Some decision rules for imprecise credence have been proposed, but my understanding is that they’re all problematic (this paper surveys some of the problems). So decision theory with imprecise credence is currently unsolved.
Examples of the first sort are what gives talk about “uncertain probabilities” its air of reasonableness, but only the second case might justify deviations from expected utility maximization. I shall have to write a post about the distinction.
Really? You can estimate your subjective credence without any uncertainty at all? Your integration of the conditional probabilities over the range of parameter values involves only numbers you are fully certain about?
I don’t believe you.
So this decision theory corpus is crippled and not very useful. Why should we care much about it?
Yes, of course, but life in general is “unsolved” and you need to make decisions on a daily basis, without waiting for a proper decision theory to mature.
I think you overestimate the degree to which abstractions are useful when applied to reality.
The fact that the assumptions of an incredibly useful theory of rational decisionmaking turn out not to be perfectly satisfied does not imply that we get to ignore the theory. If we want to do seemingly crazy things like diversifying charitable donations, we need an actual positive reason, such as the prescriptions of a better model of decisionmaking that can handle the complications. Just going with our intuition that we should “diversify” to “reduce risk”, when we know that those intuitions are influenced by well-documented cognitive biases, is crazy.
This has been incredibly unproductive I can’t believe I’m still talking to you kthxbai
Ah.
Thank you for clarity.
I’m not sure what I should take away from that exchange.
Ignore the last sentence and take the rest for what it’s worth :) I did the equivalent of somewhat tactlessly throwing up my hands after concluding that the exchange stopped being productive (for me at least, if not for spectators) a while ago.
Anything in particular you are wondering about? :-)
Just my original question. I’m not sure if diversification to mitigate charitable risk is a matter of preference or numeric objectivity.
Try making up your own mind..? :-)
Someone told me not to.
(this is a joke)
At this point you’re supposed to fry your circuits and ’splode.
Those are not even probabilities at all.
Such an expression usually implies a normal probability distribution with the given mean and standard deviation. How do you understand probabilities as applied to continuous variables?
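For instance (a minimal sketch; the 5% and 20% figures are just the ones from the earlier example), the implied distribution lets you read off a probability for any event defined on the continuous return:

```python
from statistics import NormalDist

# "Expected return of 5% with 20% volatility", read as shorthand for a
# normal distribution over the period's return (the usual convention).
returns = NormalDist(mu=0.05, sigma=0.20)

print(returns.cdf(0.0))    # ≈ 0.401: probability of losing money over the period
print(returns.cdf(-0.20))  # ≈ 0.106: probability of losing 20% or more
```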