In the standard setting, the optimizer’s curse only biases your naive estimate of the EV of the action you choose; it does not change which action the naive calculation picks. So it is not valid to use the optimizer’s curse as a critique of people who use EV calculations to make decisions, but it is valid as a critique of people who make claims about the estimated EV of their most preferred option (if they don’t already account for it).
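To make the “standard setting” concrete, here’s a minimal simulation sketch (all parameters are illustrative assumptions of mine: a shared normal prior and equally noisy estimates for every option). The naive argmax and the Bayesian argmax always agree, but the naive estimate of the chosen option’s EV is inflated relative to its actual EV:

```python
# Standard setting: same prior and same noise level for every option.
import numpy as np

rng = np.random.default_rng(0)
n_options, n_trials = 10, 100_000
prior_mean, prior_sd, noise_sd = 0.0, 1.0, 1.0

true_ev = rng.normal(prior_mean, prior_sd, size=(n_trials, n_options))
estimates = true_ev + rng.normal(0.0, noise_sd, size=(n_trials, n_options))

# Naive choice: pick the option with the highest raw estimate.
naive_pick = estimates.argmax(axis=1)

# Bayesian choice: shrink each estimate toward the prior before picking.
# With a normal prior and normal noise, the posterior mean is a fixed
# shrinkage of the estimate, so the ranking of options is unchanged.
shrink = prior_sd**2 / (prior_sd**2 + noise_sd**2)
posterior_mean = prior_mean + shrink * (estimates - prior_mean)
bayes_pick = posterior_mean.argmax(axis=1)

rows = np.arange(n_trials)
print("picks agree:", (naive_pick == bayes_pick).mean())  # 1.0: same decision
print("naive estimate of chosen EV:", estimates[rows, naive_pick].mean())
print("actual EV of chosen option: ", true_ev[rows, naive_pick].mean())
```

Because the shrinkage factor is the same for every option, correcting for the curse is a monotone transformation of the estimates: it lowers the number you report, not the option you pick.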
This is true if “the standard setting” refers to one where you have equally robust evidence about all options. But if you have more robust evidence about some options (which is common), the optimizer’s curse will especially distort the estimates for options with less robust evidence. A correct Bayesian treatment would then systematically push you towards picking the options with more robust evidence.
(Where I’m using “more robust evidence” to mean something like: evidence that has an overall greater likelihood ratio, and that therefore pushes you further from the prior. The error driving the optimizer’s curse is to look at the peak of the likelihood function while neglecting the prior and how much the likelihood ratio should actually pull you away from it.)
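A variation of the same sketch for the unequal-robustness case (again with made-up parameters: five options measured with low noise, five with high noise). Here the naive argmax is drawn to the noisy options, while per-option shrinkage pulls their estimates hard toward the prior, so the Bayesian decision itself changes:

```python
# Unequal robustness: same prior for every option, but some options are
# measured with much noisier evidence than others.
import numpy as np

rng = np.random.default_rng(1)
n_trials = 100_000
# First 5 options have robust evidence (low noise), last 5 do not.
noise_sd = np.array([0.3] * 5 + [3.0] * 5)
prior_mean, prior_sd = 0.0, 1.0
n_options = len(noise_sd)

true_ev = rng.normal(prior_mean, prior_sd, size=(n_trials, n_options))
estimates = true_ev + rng.normal(0.0, 1.0, size=(n_trials, n_options)) * noise_sd

naive_pick = estimates.argmax(axis=1)

# Per-option shrinkage: noisy estimates get pulled hard toward the prior,
# robust ones barely move, so the Bayesian choice favors robust options.
shrink = prior_sd**2 / (prior_sd**2 + noise_sd**2)
posterior_mean = prior_mean + shrink * (estimates - prior_mean)
bayes_pick = posterior_mean.argmax(axis=1)

rows = np.arange(n_trials)
print("naive picks a noisy option:", (naive_pick >= 5).mean())  # most of the time
print("bayes picks a noisy option:", (bayes_pick >= 5).mean())  # far less often
print("actual EV, naive choice:", true_ev[rows, naive_pick].mean())
print("actual EV, bayes choice:", true_ev[rows, bayes_pick].mean())
```

The raw maximum is usually set by a high-noise option (its estimates have the widest spread), while the posterior-mean maximum usually comes from a robustly measured option, and the latter choice has higher actual EV.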
(In practice I think it was rare that people appealed to the robustness of evidence when citing the optimizer’s curse, though nowadays I mostly don’t hear it cited at all.)
Agreed.