I think you credit the optimizer’s curse with power that, as described, it doesn’t have. In particular, it doesn’t imply that “people who try to optimize end up worse off than people who don’t”.
In the linked post by lukeprog, when the curse is made concrete with numbers, people who tried to optimize ended up exactly as well off as everyone else—but that’s only because by assumption, all choices were exactly the same. (“there are k choices, each of which has true estimated [expected value] of 0.”) If some choices are better than the others, then the optimizer’s curse will make the optimizer disappointed, but it will still give her better results on average than the people who failed to optimize, or who optimized less hard. (Ignoring possible actions that aren’t just “take one of these options based on the information currently available to me”.)
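To make that concrete, here’s a quick simulation sketch (mine, not lukeprog’s; it assumes unbiased i.i.d. Gaussian error terms, and the specific numbers are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
k, trials = 10, 100_000

# True values of the k choices -- here, some genuinely better than others.
true_values = np.linspace(0.0, 1.0, k)

# Each agent sees unbiased but noisy estimates of every choice's value.
estimates = true_values + rng.normal(0.0, 1.0, size=(trials, k))

picks = np.argmax(estimates, axis=1)                     # the optimizer's choice
estimated = estimates[np.arange(trials), picks]          # what she expects to get
realized = true_values[picks]                            # what she actually gets
random_pick = true_values[rng.integers(k, size=trials)]  # no optimization at all

print(f"optimizer, estimated value of pick: {estimated.mean():.3f}")    # inflated
print(f"optimizer, true value of pick:      {realized.mean():.3f}")     # lower: disappointment
print(f"random chooser, true value of pick: {random_pick.mean():.3f}")  # lower still
```

The optimizer’s realized value falls short of her estimate (the curse), but still beats the random chooser’s.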
I’m making some assumptions about the error terms here, and I’m not sure exactly what assumptions. But I think they’re fairly weak.
(And if the difference between the actually-best choice and the actually-second-best one is large compared to the error terms, then the optimizer’s curse appears to have no power at all.)
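A variant of the same sketch for that parenthetical, under the same assumptions: when one choice dominates by much more than the noise scale, the optimizer nearly always picks it, and the winning estimate is no longer systematically inflated.

```python
import numpy as np

rng = np.random.default_rng(0)
k, trials = 10, 100_000

# One choice is better than the rest by ten noise standard deviations.
true_values = np.concatenate([np.zeros(k - 1), [10.0]])
estimates = true_values + rng.normal(0.0, 1.0, size=(trials, k))
picks = np.argmax(estimates, axis=1)

inflation = estimates[np.arange(trials), picks] - true_values[picks]
print(f"picked the actually-best choice: {(picks == k - 1).mean():.4f}")  # ~1.0
print(f"mean (estimate - true) of pick:  {inflation.mean():.4f}")         # ~0: no curse
```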
There can be other things that go wrong when one tries to optimize. With your shoes and your breakfast routine, it seems to me that you invested much effort in pursuit of a goal that was unattainable in one case and trivial in the other. Unfortunate, but not the optimizer’s curse.
I wrote the above and then realised that I’m not actually sure how much you’re making the specific mistake I describe. I thought you were, partly because of

> attempts to optimize for a measure of success result in *increased likelihood* of failure to hit the desired target
Emphasis mine. But maybe the increased likelihood just comes from Goodhart’s law, here? It’s not clear to me what the optimizer’s curse is contributing to Goodhart’s curse beyond what Goodhart’s law already supplies.