So, as in most such problems, there’s an important difference between the epistemological question (“should I pay, given what I know?”) and the more fundamental question (“should I pay, supposing this description is accurate?”). Between expected value and actual value, in other words.
It’s easy to get those confused, and my intuitions about one muddy my thinking about the other, so I like to think about them separately.
WRT the epistemological question: that's hard to answer without a lot of information about how likely I consider genuine oracular ability to be, how confident I am that the examples of accurate prediction I'm aware of are a representative sample, and so on, all of which I think is both uncontroversial and uninteresting. Roughly approximating all of that, I conclude that I shouldn't pay the oracle, because I'm not justified in being more confident that the situation really is as the oracle describes it than that the oracle is misrepresenting it in some important way. My expected value of this deal in the real world is negative.
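To make that concrete, here's a toy expected-value calculation in Python. Every number in it is a made-up placeholder (my probability that the oracle is genuine, the dollar value I'd put on the QALYs at stake), so treat it as a sketch of the reasoning, not a real estimate:

```python
# Toy expected-value calculation for the epistemological question.
# Every number here is a made-up placeholder, not a claim about real odds.

p_genuine = 0.001            # assumed: probability the oracle is what it claims
p_misrepresenting = 1 - p_genuine

value_if_genuine = 500_000   # assumed: dollar value of the QALYs at stake
cost = 1_000                 # the oracle's asking price

ev_pay = p_genuine * (value_if_genuine - cost) - p_misrepresenting * cost
print("EV of paying:", ev_pay)  # -500.0 with these placeholders

# Paying only wins once p_genuine exceeds roughly cost / value_if_genuine
# (0.2% with these numbers); below that, expected value is negative.
```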
WRT the fundamental question… of course, you leave a lot of details unspecified, but I don't want to fight the hypothetical here, so I'm assuming that the "overall gist" of your description applies: I'm paying $1K for QALYs I would not have had access to without the oracle's offer. That's a good deal for me; I'm inclined to take it. (Though I might try to negotiate the price down.)
The knock-on effect is that I encourage the oracle to keep making this offer… but that’s good too; I want the oracle to keep making the offer. QALYs for everyone!
So, yes, I should pay the oracle (the answer to the fundamental question), though I should also implement decision procedures that will lead me to not pay the oracle (the answer to the epistemological one).
A key part of the question, as I see it, is to formalize the difference between treatment effects and selection effects (in a context where our actions might reflect a selection effect, so we can't make the normally reasonable assumption that our actions produce treatment effects). An oracle could look into the future, find a list of people who will die in the next week and a list of people who would pay them $1000 if presented with this prompt, and present the prompt to the exclusive or of those two lists. That doesn't give anyone QALYs they wouldn't have had otherwise.
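Here's a minimal simulation of that pure-selection story. The population rates (base death rate, fraction who'd pay) are placeholders I made up; the point is only that the oracle's track record can be perfect while prompting changes no one's fate:

```python
import random

# Minimal simulation of a pure selection effect. The oracle only prompts
# people on exactly one of two lists (XOR): those who will die this week,
# and those who would pay $1000 if prompted. Population rates are placeholders.

random.seed(0)
people = [
    {"dies_this_week": random.random() < 0.01,   # assumed base death rate
     "would_pay": random.random() < 0.30}        # assumed fraction who'd pay
    for _ in range(100_000)
]

# Prompt the exclusive or of the two lists.
prompted = [p for p in people if p["dies_this_week"] != p["would_pay"]]

# The claim "you will pay me or die this week" is true for everyone prompted...
assert all(p["dies_this_week"] or p["would_pay"] for p in prompted)

# ...but no payer's life is saved: by the XOR, everyone prompted who pays
# was never going to die, and everyone who dies was never going to pay.
deaths_among_payers = sum(p["dies_this_week"] for p in prompted if p["would_pay"])
print(deaths_among_payers)  # 0 by construction
```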
And so I find my intuitions are guided mostly by the identification of the prompter as an “oracle” instead of a “wizard” or “witch.” Oracle implies selection effect; wizard or witch implies treatment effect.
Leaving aside lexical questions about the connotations of the word “oracle”, I certainly agree that if the entity’s accuracy represents a selection effect, then my reasoning doesn’t hold.
Indeed, I at least intended to say as much explicitly in my comment ("I don't want to fight the hypothetical here, so I'm assuming that the 'overall gist' of your description applies: I'm paying $1K for QALYs I would not have had access to without the oracle's offer.").
That said, it’s entirely possible that I misread what the point of DanielLC’s hypothetical was.
DanielLC said:
They just go around and find people who will either give them money or die in the near future, and tell them that.
I interpreted that as a selection effect, so my answer recommended not paying. Now I realize that it may not be entirely a selection effect. Maybe the oracle is also finding people whose life would be saved by making them $1000 poorer, for various exotic reasons. But if the probability of that is small enough, my answer stays the same.
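That last clause can be put as a toy break-even check. The dollar value I'd place on surviving the week is an assumed placeholder; the point is just how small the treatment probability can be before paying stops making sense:

```python
# Toy break-even check: given that I've been prompted, how likely must the
# treatment reading (paying actually saves me) be before paying wins?
# The dollar value of surviving the week is an assumed placeholder.

value_of_survival = 500_000
cost = 1_000

def ev_pay(p_treatment: float) -> float:
    # In the selection case the $1000 is simply lost; my fate was fixed anyway.
    return p_treatment * value_of_survival - cost

print(cost / value_of_survival)   # break-even p_treatment: 0.002
print(ev_pay(0.0001))             # far below break-even: -950.0, don't pay
```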
Right. Your reading is entirely sensible, and more likely in "the real world" (by which I mean something not-well-thought-through about how it's easier to implement the original description as a selection effect); I merely chose to bypass that reading and go with what I suspected (perhaps incorrectly) the OP actually had in mind.