You got the basic idea across, which is a big deal.
Though whether it’s A or B isn’t clear:
A) “This isn’t all of the utility function, but it’s everything that’s relevant to making decisions about this right now.” ^U doesn’t have to be U, or even a good approximation in every situation; it just has to be good enough in the situations where we actually use it (a toy sketch after this list tries to make that concrete).
Building a building? A desire for things to not fall on people’s heads becomes relevant (and knowledge of how to do that).
Writing a program that writes programs? It’d be nice if it didn’t produce malware.
Both desires usually exist—and usually aren’t relevant. Models of utility for most situations won’t include them.
B) The cost of computing the utility function more exactly in a given case exceeds the (expected) gains.
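Here’s a minimal sketch of reading A; the terms, weights, and contexts are entirely made up for illustration, not taken from the post. The full utility function has a term for everything we care about, while the context-restricted ^U keeps only the terms relevant to the decision at hand, and still picks the same option because the dropped terms don’t differ across the options being compared.

```python
# Toy sketch of reading A (all terms, weights, and contexts here are made up
# for illustration; nothing is taken from the original post).

# The "full" utility function scores an outcome over every term we care about.
FULL_TERMS = {
    "task_quality": 1.0,
    "cost": -0.5,
    "building_safety": -100.0,   # only matters when we're building buildings
    "no_malware": -100.0,        # only matters when we're generating programs
}

# Which terms are actually decision-relevant in a given context.
RELEVANT = {
    "writing_an_essay": ["task_quality", "cost"],
    "building_a_building": ["task_quality", "cost", "building_safety"],
}

def full_U(outcome):
    """The 'true' utility: sums every term. Expensive / often unavailable."""
    return sum(w * outcome.get(term, 0.0) for term, w in FULL_TERMS.items())

def approx_U(outcome, context):
    """^U: only the terms relevant to this context. Cheaper, and good enough
    here because the dropped terms don't vary across the options we compare."""
    return sum(FULL_TERMS[t] * outcome.get(t, 0.0) for t in RELEVANT[context])

# Two essay-writing options; neither involves buildings or malware, so the
# dropped terms are zero for both and ^U picks the same option as U.
options = [
    {"task_quality": 0.9, "cost": 0.4},
    {"task_quality": 0.7, "cost": 0.1},
]
best_full = max(options, key=full_U)
best_approx = max(options, key=lambda o: approx_U(o, "writing_an_essay"))
assert best_full == best_approx
```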
I think I agree with you. There’s a lot of messiness in using ^U, and I’m sure this approximation leads to decision errors in many real cases. I’d also agree that better approximations than ^U would be costly and often aren’t worth the effort.
Similar to how there’s a term for “expected value of perfect information”, there could be an equivalent for the expected value of a better utility function, even outside of uncertainty in parameters that were already thought to be included. Really, there could be calculations for the “expected benefit from improvements to a model”, though of course this would be difficult to parameterize. (How would you declare that a model has been changed a lot vs. a little? If I introduce 2 new parameters, but those parameters aren’t that important, how big a deal should this be considered in expectation?)
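One rough way this could be written down (the notation and the name “EVMI” are mine, purely as a sketch, not from the post): by analogy with EVPI, the expected benefit of moving from the current approximation ^U to an improved ^U' is the expected gain, under the true U, from acting on ^U' instead of ^U.

```latex
% Sketch of an "expected value of model improvement", by analogy with EVPI.
% Notation is assumed/illustrative: s is the decision situation, a is an action,
% and a_M(s) = \arg\max_a \mathbb{E}[M \mid a, s] is the action model M recommends.
\[
\mathrm{EVMI}(\hat{U} \to \hat{U}')
  \;=\; \mathbb{E}_{s}\!\left[\, U\!\big(a_{\hat{U}'}(s)\big) \;-\; U\!\big(a_{\hat{U}}(s)\big) \,\right]
\]
% A refinement is only worth making when this exceeds the cost of computing \hat{U}'.
```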
The model has changed when the decisions it is used to make change. If the model ‘reverses’, recommending the opposite (or at least something different) in every case from what it previously recommended, then it has ‘completely changed’.
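That suggests a simple metric, sketched below with made-up stand-ins for the situations, actions, and models: the fraction of sampled decision situations in which the old and new versions of the model recommend different actions (0 means the decisions are unchanged, 1 means it has ‘completely changed’).

```python
# Sketch of "the model has changed when the decisions it makes change".
# Everything here (situations, actions, the models themselves) is illustrative.

def recommended_action(model, situation, actions):
    """The action a model recommends: the one it scores highest."""
    return max(actions, key=lambda a: model(situation, a))

def decision_change(old_model, new_model, situations, actions):
    """Fraction of situations where the two models recommend different actions.
    0.0 = no change in behavior; 1.0 = 'completely changed'."""
    disagreements = sum(
        recommended_action(old_model, s, actions) != recommended_action(new_model, s, actions)
        for s in situations
    )
    return disagreements / len(situations)

# Toy example: the updated model weights cost heavily enough to flip every decision.
actions = ["cheap_option", "fancy_option"]
situations = list(range(10))  # stand-in for sampled decision situations

old_model = lambda s, a: 2.0 if a == "fancy_option" else 1.0   # prefers fancy
new_model = lambda s, a: 0.5 if a == "fancy_option" else 1.0   # cost now dominates

print(decision_change(old_model, new_model, situations, actions))  # -> 1.0
```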
(This might be roughly the McNamara fallacy, of declaring that things that ‘can’t be measured’ aren’t important.)
EDIT: Also, suppose there’s a set of information consisting of pieces A, B, and C, and incorporating all but one of them doesn’t have a big impact on the model, but incorporating the last one does, whichever piece that happens to be. This metric could then overestimate the importance of whichever piece was incorporated last, when it’s really A, B, and C together that made the impact. It has this issue because the metric by itself is only meant to notice changes in the model over time, not to figure out why they happened or solve attribution.
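A toy illustration of that attribution problem, with an invented stand-in model whose recommendation only flips once all three pieces are incorporated: the per-piece “did the decisions change?” check credits whichever piece happens to come last, regardless of the order.

```python
# Toy illustration: pieces A, B, C only shift the model's decisions when all
# three are present, so a per-piece "did the decisions change?" metric credits
# whichever piece happens to be incorporated last. All details are made up.

def recommends_new_action(pieces_incorporated):
    """Stand-in model: its recommendation flips only once it has A, B, and C."""
    return {"A", "B", "C"} <= set(pieces_incorporated)

for order in (["A", "B", "C"], ["C", "A", "B"]):
    incorporated = []
    for piece in order:
        before = recommends_new_action(incorporated)
        incorporated.append(piece)
        after = recommends_new_action(incorporated)
        print(f"order {order}: adding {piece} changed the decision: {before != after}")
# In each ordering, only the last piece registers a change, even though the
# effect comes from A, B, and C together.
```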
^U and ^U look to be the same.
Thanks! Fixed.
I’m sure the bottom notation could be improved, but I’m not sure of the best way. In general I’m trying to get better at this kind of mathematics.