Come to think of it, though, maybe a great many people do have the kind of utility you described here: utility calculated from the possession of items.
I think this is a key point. Our brains seem prone to a dynamic where we assign some attribute to our representation of a thing in a way that makes sense in the short term (e.g., valuing money), but we then fail to entirely re-initialize that assignment when we’re done doing whatever we were doing, so over time our instrumental goals take on a terminal value of their own. Theories involving clear-cut lines between terminal and instrumental goals consequently don’t describe actual human behavior terribly well.
Yes, absolutely. And I imagine there’s great variety in human behaviours. I don’t really assign utility to money so much as foresee that more money allows for a higher quality of life if certain conditions are met (and that ultimately involves exchanging the money for things; the inclination to exchange is not the result of some very slight difference in the ‘utility’ of possessions).
Consider playing chess… an effective chess program may assign some simplistic utilities to pieces and particular patterns, which it evaluates in the near-future states it foresees under best play by the opponent.
It does not, however, do knight1.utility = (utility of victory) for some knight when that knight is critical to the inevitable checkmate (victory) it foresees. There’s no point: it will use that knight correctly without the utility adjustment. If it did make that adjustment, it would have a bug whenever the checkmate involved sacrificing that knight (or ‘exchanging’ it for a bishop). Some people may have that bug; some may not. I think it is better to focus not on how ‘people’ work on average but on the diversity of human behaviour and the efficacy of different strategies.
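A toy sketch of the point (hypothetical names and values, not a real engine): piece utilities stay purely instrumental inside the evaluation function, so a foreseen checkmate outweighs keeping the knight without the knight itself ever inheriting the value of victory.

```python
# Hypothetical toy evaluator: piece values are instrumental heuristics only.
PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}
CHECKMATE = 10_000  # terminal value of victory, far above any material count

def evaluate(position):
    """Score a foreseen end position: a terminal win beats any material count."""
    if position["checkmate"]:
        return CHECKMATE
    return sum(PIECE_VALUES[p] for p in position["our_pieces"])

# Line A: sacrifice the knight, reach checkmate.
line_a = {"checkmate": True, "our_pieces": ["queen", "rook"]}
# Line B: keep the knight, but no mate.
line_b = {"checkmate": False, "our_pieces": ["queen", "rook", "knight"]}

# The sacrifice is chosen because only the foreseen outcome is scored;
# no knight1.utility = (utility of victory) assignment is ever needed.
best = max([line_a, line_b], key=evaluate)
assert best is line_a
```

The ‘bug’ described above would correspond to permanently raising the knight’s entry in PIECE_VALUES once it appears in a winning line; the sacrifice in line A would then wrongly look like a loss.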