In real life, the problem with metrics is that unless you get them exactly right (which is difficult), you can easily end up with something useless, or even actively harmful.
And yet, metrics often are useful in real life. You generally want to measure things. You need to know how much money you have, and it is better to know the structure of your income and expenses in detail. If you want to, say, exercise regularly or stop eating chocolate, keeping a log of which days you exercised or avoided the chocolate is often a good first step.
Thus we find ourselves in a paradox: we need good metrics, but we must remember that they are mere approximations of reality, lest we start optimizing for the metrics at the expense of the real things. (Good advice for a human, not very useful for constructing an AI.)
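As a toy illustration (my own sketch, not part of the original exchange; the numbers and names are made up), here is the basic mechanism of optimizing a proxy: the harder you select on an imperfect measurement, the more you select for its error rather than for the real thing.

```python
import numpy as np

# Toy model: each option has a "true value" we care about, but we can only
# observe a noisy proxy metric. Picking the option with the best metric
# systematically picks options whose metric overstates their true value.
rng = np.random.default_rng(42)

n_options = 100_000
true_value = rng.normal(0.0, 1.0, n_options)   # the real thing
noise = rng.normal(0.0, 1.0, n_options)        # imperfection of the metric
metric = true_value + noise                    # what we actually measure

best = np.argmax(metric)
print(f"metric score of the chosen option: {metric[best]:.2f}")
print(f"true value of the chosen option:   {true_value[best]:.2f}")
# Typically the metric score is roughly double the true value here:
# much of what made this option look best was measurement error.
```

With a metric that the optimizer can inflate directly (rather than merely observe with noise), the divergence gets worse, which is the "actively harmful" case above.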
Evolutionarily, I guess the human/animal utility function would be something like “How many copies of myself have I made? Let’s maximize that.” But from the subjective perspective, it’s probably more like “Am I receiving pleasure from the reward system my brain happened to develop?”
Yes, the “utility” of evolution is not the same as that of the evolved human.
For sure there are a bunch of different impulses/drives, but they are all just little rewards for transforming the current state of the world into one our brain prefers, right?
Sometimes following your impulse can make you unhappy and still, on average, increase your fitness; jealousy, for example. (Jealous people are made less happy by the idea that their partners might be cheating on them. But feeling this discomfort and guarding one’s partner increases reproductive fitness on average.) I mean, yes, finding out that despite your suspicions your partner does not cheat on you makes you happier (or less unhappy) than finding out that they actually do. But not worrying about the possibility at all would make you happier still. Instinctively, humans are not even happiness maximizers.