Adding imprecise probability (a 1.1% credence that I’m not sure of) takes us a bit afield, I think. Imprecise probability doesn’t have an established decision theory in the way probability has expected utility theory. But that aside, assuming that I’m calibrated in the 1% range and good at introspection, and my introspection really tells me that my expected QALY/$ for charity B is 1.1, I’ll donate to charity B. I don’t know how else to make this decision. I’m curious to hear how much meta-confidence/precision in that 1.1% chance you need to switch from A to B (or go “all in” on B). If not even full precision (e.g. the outcome being tied to an RNG) is enough for you, then you’re maximizing something other than expected QALYs.
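A minimal numeric sketch of that decision rule; the normal shape and the 0.5 spread for charity B are my own illustrative assumptions, not anything stated above:

```python
import random

random.seed(0)

# Charity A: a known 1.0 QALY/$.  Charity B: my credence about its
# QALY/$ is uncertain; model it (purely for illustration) as normal
# with mean 1.1 and a wide spread.
N = 100_000
b_samples = [random.gauss(1.1, 0.5) for _ in range(N)]

# With utility linear in QALYs, only the mean of the credence
# distribution matters; the spread drops out of the expectation.
expected_b = sum(b_samples) / N
print(f"E[QALY/$ of B] = {expected_b:.2f}")  # ~1.10 > 1.00, so donate to B
```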
(I agree with Gelman that risk-aversion estimates from undergraduates don’t make any financial sense. Neither do estimates of their time preference. That just means that people compartmentalize or outsource financial decisions where the stakes are actually high.)
I agree with Gelman that risk-aversion estimates from undergraduates don’t make any financial sense.
Gelman’s point has nothing to do with whether undergrads have any financial sense or not. Gelman’s point is that treating risk aversion as solely a function of the curvature of the utility function makes no sense whatsoever—for all humans.
Let me try to refocus a bit. You seem to want to describe a situation where I have uncertainty about probabilities, and hence uncertainty about expected values. If this is not so, your points are plainly inconsistent with expected utility maximization, assuming that your utility is roughly linear in QALYs in the range you can affect. If you are appealing to imprecise probability, what I alluded to by “I have no idea” is that there are no generally accepted theories (certainly not “plenty”) for decision making with imprecise credence. It is very misleading to invoke diversification, risk premia, etc. as analogous or applicable to this discussion. None of these concepts make any essential use of imprecise probability in the way your example does.
You seem to want to describe a situation where I have uncertainty about probabilities, and hence uncertainty about expected values.
Correct.
there are no generally accepted theories (certainly not “plenty”) for decision making with imprecise credence.
Really? Keep in mind that in reality people make decisions on the basis of “imprecise probabilities” all the time. In fact, outside of controlled experiments, it’s quite unusual to know the precise probability because real-life processes are, generally speaking, not that stable.
It is very misleading to invoke diversification, risk premia, etc. as analogous or applicable to this discussion.
On the contrary, I believe it’s very illuminating to apply these concepts to the topic under discussion.
I did mention finance, which is a useful example because it’s a field where people deal with imprecise probabilities all the time and the outcomes of their decisions are both very clear and very motivating. You don’t imagine that when someone, say, characterizes a financial asset as having an expected return of 5% with 20% volatility, these probabilities are precise, do you?
There are two very different sorts of scenarios with something like “imprecise probabilities”.
The first sort of case involves uncertainty about a probability-like parameter of a physical system such as a biased coin. In a sense, you’re uncertain about “the probability that the coin will come up heads” because you have uncertainty about the bias parameter. But when you consider your subjective credence about the event “the next toss will come up heads”, and integrate the conditional probabilities over the range of parameter values, what you end up with is a constant. No uncertainty.
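Concretely, writing $\theta$ for the unknown bias and $f(\theta)$ for your prior density over it (my notation, purely for illustration):

$$P(\text{heads}) = \int_0^1 P(\text{heads}\mid\theta)\,f(\theta)\,d\theta = \int_0^1 \theta\,f(\theta)\,d\theta = \mathbb{E}[\theta],$$

a single fixed number; a uniform prior, for instance, gives exactly 1/2.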
In the second sort of case, your subjective credences are themselves uncertain. On the usual definition of subjective probabilities in terms of betting odds this is nonsense, but maybe it makes some sense for boundedly introspective humans. Approximately none of the decision theory corpus applies to this case, because it all assumes that credences and expected values are constants known to the agent. Some decision rules for imprecise credence have been proposed, but my understanding is that they’re all problematic (this paper surveys some of the problems). So decision theory with imprecise credence is currently unsolved.
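For concreteness, a sketch of one rule from that literature, Γ-maximin, which ranks acts by their worst-case expected utility over the set of admissible credences; the charity numbers are my own illustrative assumptions:

```python
# Gamma-maximin: over a *set* of admissible credence distributions
# (a credal set), rank each act by its minimum expected utility.
# Illustrative numbers: A's expected QALY/$ is precisely 1.0, while
# B's is only pinned down to the interval [0.8, 1.4].
acts = {
    "donate to A": (1.0, 1.0),  # (worst-case EU, best-case EU)
    "donate to B": (0.8, 1.4),
}

best = max(acts, key=lambda act: acts[act][0])
print(best)  # "donate to A": worst case 1.0 beats B's worst case 0.8
```

Note how this deviates from expected utility maximization: B’s midpoint (1.1) is higher, but its worst case is lower, so the rule picks A.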
Examples of the first sort are what give talk about “uncertain probabilities” its air of reasonableness, but only the second case might justify deviations from expected utility maximization. I shall have to write a post about the distinction.
But when you consider your subjective credence about the event “the next toss will come up heads”, and integrate the conditional probabilities over the range of parameter values, what you end up with is a constant. No uncertainty.
Really? You can estimate your subjective credence without any uncertainty at all? Your integration of the conditional probabilities over the range of parameter values involves only numbers you are fully certain about?
I don’t believe you.
Approximately none of the decision theory corpus applies to this case
So this decision theory corpus is crippled and not very useful. Why should we care much about it?
So decision theory with imprecise credence is currently unsolved.
Yes, of course, but life in general is “unsolved” and you need to make decisions on a daily basis without waiting for a proper decision theory to mature.
I think you overestimate the degree to which abstractions are useful when applied to reality.
The fact that the assumptions of an incredibly useful theory of rational decisionmaking turn out not to be perfectly satisfied does not imply that we get to ignore the theory. If we want to do seemingly crazy things like diversifying charitable donations, we need an actual positive reason, such as the prescriptions of a better model of decisionmaking that can handle the complications. Just going with our intuition that we should “diversify” to “reduce risk”, when we know that those intuitions are influenced by well-documented cognitive biases, is crazy.
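A sketch of the underlying arithmetic, assuming utility linear in QALYs: for any split $\alpha \in [0,1]$ of a budget between charities A and B with per-dollar payoffs $X_A$ and $X_B$,

$$\mathbb{E}[\alpha X_A + (1-\alpha) X_B] = \alpha\,\mathbb{E}[X_A] + (1-\alpha)\,\mathbb{E}[X_B] \le \max\{\mathbb{E}[X_A],\,\mathbb{E}[X_B]\},$$

so no mixture beats going all in on the charity with the higher expected value, and any positive case for splitting has to come from outside this model.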
This has been incredibly unproductive I can’t believe I’m still talking to you kthxbai
You don’t imagine that when someone, say, characterizes a financial asset as having an expected return of 5% with 20% volatility, these probabilities are precise, do you?
Those are not even probabilities at all.
Such an expression usually implies a normal probability distribution with the given mean and standard deviation. How do you understand probabilities as applied to continuous variables?
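To make that reading concrete, a sketch using Python’s standard library; the probabilities below follow from the stated 5%/20% figures, not from any market data:

```python
from statistics import NormalDist

# "Expected return of 5% with 20% volatility", read as a normal
# distribution over the annual return.
ret = NormalDist(mu=0.05, sigma=0.20)

# For a continuous variable, probabilities attach to events (ranges),
# not to point values:
print(f"P(return < 0)    = {ret.cdf(0.0):.1%}")    # ~40.1%
print(f"P(return < -20%) = {ret.cdf(-0.20):.1%}")  # ~10.6%
```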
assuming that I’m calibrated in the 1% range and good at introspection and my introspection really tells me that my expected QALY/$ for charity B is 1.1, I’ll donate to charity B. I don’t know how else to make this decision.
If you’re truly risk neutral, you would discount all uncertainty to zero; the expected value is all you’d care about.
Your introspection tells you that you’re uncertain. Your best guess is 1.1, but it’s just a guess. The uncertainty is very high.
Oh, there are plenty of ways, just look at finance. Here’s a possible starting point.
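A sketch of where that imprecision shows up, with invented data; the standard error of the estimated mean is the imprecision in question:

```python
import random
import statistics

random.seed(1)

# Ten years of invented annual returns from a process with true mean
# 5% and volatility 20% -- in practice you would observe only the data.
history = [random.gauss(0.05, 0.20) for _ in range(10)]

mean = statistics.mean(history)
se = statistics.stdev(history) / len(history) ** 0.5
print(f"estimated expected return: {mean:.1%} +/- {se:.1%} (one s.e.)")
```

With only ten observations the standard error is around six percentage points, so a headline “5% expected return” is itself a highly uncertain estimate.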
This has been incredibly unproductive I can’t believe I’m still talking to you kthxbai
Ah.
Thank you for clarity.
I’m not sure what I should take away from that exchange.
Ignore the last sentence and take the rest for what it’s worth :) I did the equivalent of somewhat tactlessly throwing up my hands after concluding that the exchange stopped being productive (for me at least, if not for spectators) a while ago.
Anything in particular you are wondering about? :-)
Just my original question. I’m not sure if diversification to mitigate charitable risk is a matter of preference or numeric objectivity.
Try making up your own mind…? :-)
Someone told me not to.
(this is a joke)
At this point you’re supposed to fry your circuits and ’splode.