VALUE OF INFORMATION
Say I have the option to purchase a lottery ticket with a 1/10m chance of winning, a $1m guaranteed jackpot, and a $1 ticket price. The EV of a ticket is $1m/10m - $1 = -$0.90, a bad deal. What is the value of information that narrows the odds?
If it narrowed the odds all the way to 1/1, the value would be $999,999, obviously a very good deal. I should be willing to pay up to almost $999,999 for it.
But what if the odds are merely doubled? The EV of a purchase is now $1m/5m - $1 = -$0.80. In one sense I’ve gained $0.10. But since buying a lottery ticket still has EV < 0, I won’t purchase one, and so the information nets me no money. The information is worth $0 to me, as long as I have the option not to buy a ticket.
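Here’s a minimal sketch of that calculation in Python (the numbers above are hardcoded as assumptions; the floor at $0 is just the option not to buy):

```python
# Value of information for the lottery example above.
# Assumed numbers: $1m jackpot, $1 ticket, baseline odds 1/10,000,000.
JACKPOT = 1_000_000
TICKET = 1
P0 = 1 / 10_000_000

def best_decision_ev(p_win: float) -> float:
    """EV of my best choice: buy only if the ticket has positive EV, else $0."""
    return max(p_win * JACKPOT - TICKET, 0.0)

# Information that narrows the odds all the way to 1/1:
print(best_decision_ev(1.0) - best_decision_ev(P0))      # 999999.0
# Information that merely doubles the odds:
print(best_decision_ev(2 * P0) - best_decision_ev(P0))   # 0.0
```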
But… maybe there are multiple interventions that double my odds? Under the above rule, the first three are worth nothing and the fourth is worth 2^4 × $1m/10m - $1 = $1.60 - $1 = $0.60. It seems wrong that 4 identical interventions would have such different values. I could just divide $0.60 by 4, but what if I’m not certain how many doublings will be available?
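A quick sketch of the marginal value of each successive doubling under that rule, using the same assumed numbers:

```python
# Marginal value of each successive doubling, same assumed numbers as above.
JACKPOT, TICKET, P0 = 1_000_000, 1, 1 / 10_000_000

def best_decision_ev(p_win: float) -> float:
    # Buy only if the ticket has positive EV, otherwise stay at $0.
    return max(p_win * JACKPOT - TICKET, 0.0)

prev = best_decision_ev(P0)
for k in range(1, 5):
    cur = best_decision_ev(P0 * 2 ** k)
    print(f"doubling #{k}: marginal value ${cur - prev:.2f}")
    prev = cur
# doubling #1: marginal value $0.00
# doubling #2: marginal value $0.00
# doubling #3: marginal value $0.00
# doubling #4: marginal value $0.60
```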
Interesting point. I think this is more a question of counterfactual and Shapley values than VOI, as the concern is really a broader one. Similar to how, if 4 interventions are all needed to save a life, you can’t just consider the counterfactual impact of the last one.
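For concreteness, here is a rough Shapley-value sketch of the four-doublings case, treating the doublings as symmetric players in a game whose value is the EV gain they unlock (the game definition is my own illustration, not something from the post). By symmetry each doubling gets credited $0.60/4 = $0.15:

```python
# Brute-force Shapley values for four identical "doubling" interventions.
# Coalition value = EV gain of the best decision given |S| doublings,
# with the same assumed lottery numbers as above.
from itertools import permutations

JACKPOT, TICKET, P0 = 1_000_000, 1, 1 / 10_000_000
PLAYERS = range(4)

def v(coalition) -> float:
    # Value of a set of doublings: EV of the best decision, floored at $0.
    p = P0 * 2 ** len(coalition)
    return max(p * JACKPOT - TICKET, 0.0)

def shapley(player: int) -> float:
    """Average marginal contribution of `player` over all join orders."""
    orders = list(permutations(PLAYERS))
    total = 0.0
    for order in orders:
        before = set(order[:order.index(player)])
        total += v(before | {player}) - v(before)
    return total / len(orders)

print([round(shapley(i), 2) for i in PLAYERS])  # [0.15, 0.15, 0.15, 0.15]
```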
Most VOI analyses are highly simplified to get around issues like this.
If the interventions have aggregate impact distinct from individual impact, they’re by definition not identical. There are lots of things that are valuable only in conjunction with other things, or have threshold values for quantities—it doesn’t seem problematic to say “any of these interventions is valueless, but all 4 together are worth $0.6 per $1 I have to invest”.
Why not? I don’t see how that follows.
It’s not an important point—the value of aggregates being different from a simple sum of the components is mostly what I meant to say.
The interventions may be identical for some kinds of measurement, that is, they “feel” the same to the user. But they’re not actually identical if they’re applied to different states of the world and have different results from each other.
“Rationality” is pretending you can model (despite) uncertainty, and making a model.
‘If you start with a prior’ sounds great—unless you don’t have a prior.
I’m going to take the easy way out and say that if you did have a prior, then maybe finding there is such an ‘intervention’ tells me something, since that’s probably more likely in worlds where there are more interventions*... and I just realized the value of information problem is both object level and meta.
*Then you update, in theory.
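As a toy illustration of that update (the prior over how many doublings exist and the per-doubling detection probability are entirely made up): observing one doubling shifts weight toward worlds that contain more of them.

```python
# Toy Bayesian update on "how many doublings exist", after finding one.
# The prior and the detection probability are invented for illustration.
prior = {0: 0.5, 1: 0.2, 2: 0.15, 3: 0.1, 4: 0.05}
P_FIND = 0.3  # chance each existing doubling gets noticed, independently

# Likelihood of having found at least one doubling, given n exist:
likelihood = {n: 1 - (1 - P_FIND) ** n for n in prior}

unnorm = {n: prior[n] * likelihood[n] for n in prior}
z = sum(unnorm.values())
print({n: round(w / z, 3) for n, w in unnorm.items()})
# Worlds with more doublings gain posterior weight; n = 0 drops to zero.
```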