I think I agree with you. There’s a lot of messiness with using ^U, and I’m sure this approximation leads to decision errors in many real cases. I’d also agree that better approximations of ^U would be costly and often not worth the effort.
Similar to how there’s a term for “Expected value of perfect information”, there could be an equivalent for the expected value of a utility function, even outside of uncertainty over the parameters that were already thought to be included. Really, there could be calculations for the “expected benefit from improvements to a model”, though of course this would be difficult to parameterize (how would you decide whether a model has changed a lot vs. a little? If I introduce 2 new parameters, but these parameters aren’t that important, how big of a deal should this be considered in expectation?)
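To gesture at what that calculation could look like, here’s a minimal sketch (the framing, function names, and toy utilities are all mine, assuming a finite set of actions and a distribution over decision scenarios): the expected value of a better utility function is the utility lost, measured with the true U, by taking the action that the approximation ^U ranks highest instead of the truly best one.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_value_of_better_U(true_U, approx_U, scenarios, actions):
    """Average utility lost (measured with the true U) by choosing the action
    the approximation ranks highest instead of the truly best action."""
    losses = []
    for s in scenarios:
        best_true = max(actions, key=lambda a: true_U(a, s))
        best_approx = max(actions, key=lambda a: approx_U(a, s))
        losses.append(true_U(best_true, s) - true_U(best_approx, s))
    return float(np.mean(losses))

# Hypothetical toy case: the approximation drops a scenario-dependent term.
actions = [0.0, 0.5, 1.0]
scenarios = rng.normal(size=1000)
true_U = lambda a, s: -(a - (0.5 + 0.3 * s)) ** 2
approx_U = lambda a, s: -(a - 0.5) ** 2
print(expected_value_of_better_U(true_U, approx_U, scenarios, actions))
```

If that number is small relative to the cost of improving the approximation, the improvement isn’t worth it, which is the EVPI-style comparison I have in mind.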
The model has changed when the decisions it is used to make change. If the model ‘reverses’, suggesting the opposite or something different in every case from what it previously recommended, then it has ‘completely changed’.
(This might be roughly the McNamara fallacy, of declaring that things that ‘can’t be measured’ aren’t important.)
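Concretely, that definition suggests something like the following (a sketch under my own assumptions, with made-up names and toy models): the distance between the old and new model is the fraction of decision scenarios in which the recommended action differs.

```python
def decision_change_fraction(old_model, new_model, scenarios, actions):
    # Fraction of scenarios where the new model recommends a different action
    # than the old one: 0 = decisions unchanged, 1 = 'completely changed'.
    changed = sum(
        max(actions, key=lambda a: old_model(a, s))
        != max(actions, key=lambda a: new_model(a, s))
        for s in scenarios
    )
    return changed / len(scenarios)

# Toy usage: the new model only favors action 1 when the scenario s is positive.
actions = [0, 1]
scenarios = [-2, -1, 1, 2]
old_model = lambda a, s: a        # always prefers action 1
new_model = lambda a, s: a * s    # prefers action 1 only when s > 0
print(decision_change_fraction(old_model, new_model, scenarios, actions))  # 0.5
```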
EDIT: Also, suppose there’s a set of information consisting of pieces A, B, and C, and incorporating all but one of them doesn’t have a big impact on the model, but the last piece (whichever one that is) does. This metric could then overestimate the importance of whichever piece happened to come last, when really it’s A, B, and C together that made the impact. It has this issue because the metric by itself is only meant to notice ‘changes in the model over time’, not to figure out why they happened or solve attribution.
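To make the failure mode concrete, here’s a toy illustration (entirely made up): a model whose recommendation only flips once A, B, and C are all incorporated, so a ‘did the decisions change?’ check credits the whole change to whichever piece happens to arrive last.

```python
from itertools import permutations

def recommendation(pieces):
    # Hypothetical model: only switches to action 1 once A, B, and C are all in.
    return 1 if {"A", "B", "C"} <= set(pieces) else 0

for order in permutations(["A", "B", "C"]):
    incorporated, credited = [], []
    previous = recommendation(incorporated)
    for piece in order:
        incorporated.append(piece)
        current = recommendation(incorporated)
        if current != previous:
            credited.append(piece)
        previous = current
    print(order, "change attributed to:", credited)  # always just the last piece
```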