They mostly like the associated taste sensations and the satiety that follows. As I understand it, there are separate taste receptors for fat and sugar—so it is probably better to say that humans desire some types of fat and sugar than to say that they desire calories.
And by the same token, it is probably even better to say that they desire ice cream and/or the taste of ice cream, and so on for other particular foods. The brain integrates information from the receptors you mentioned together with other taste receptors, smell receptors, texture sensations, and so on. Percepts and concepts are formed from the integrated total, and these frame the language of desire. Probably some of the best chefs and food critics do directly perceive, and savor, fat and sugar contents as such, but I doubt whether the same applies to all of us. Most of us are too distracted by the rich, complex gestalt experience. This isn’t to deny, of course, that our desires are strongly influenced by fat content.
It seems to me that you are not allowing enough slippage between two levels of explanation: what the genes want, and what the organisms want. Genes built our desires, but their “purposes” in doing so are not identical to those desires. Whereas, in the context of our conversation here, it would not be too wrong to say that humans’ purposes are our desires.
By the way, I apologize if it sounded like I’m trying to oversimplify your position. In a (failed) economy of words, I figured it was OK to focus on one of the examples, namely a desire for fat.
As I understand it, there are separate taste receptors for fat and sugar—so it is probably better to say that humans desire some types of fat and sugar than to say that they desire calories.
And by the same token, it is probably even better to say that they desire ice cream and/or the taste of ice cream, and so on for other particular foods.
So: my position is that it is fine to talk like that—provided one makes the distinction between proximate and ultimate values. There’s a pretty neat and general way of abstracting learning systems out into agent, ultimate values and environment using the framework of reinforcement learning. Under that abstraction, “the taste of ice cream” is not one of the ultimate values. Those values might include diversity, contrast and texture as well as fat and sugar—but I don’t think there’s much of a case for putting “the taste of ice cream” in there.
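To make that abstraction concrete, here is a toy sketch (every name and number below is invented for illustration, and in no way drawn from the papers under discussion). The ultimate values live in a fixed reward function over raw features; a proximate value like “ice cream is good” emerges inside the agent only because the reward channel scores ice cream’s features highly:

```python
def ultimate_reward(observation):
    # Ultimate values: a fixed scoring of raw features (fat, sugar),
    # analogous to the reward channel the genes hard-wire.
    fat, sugar = observation
    return 0.7 * fat + 0.3 * sugar

class ToyFoodEnvironment:
    # Environment: maps the agent's action to raw sensory features.
    MENU = {"ice_cream": (0.9, 0.8), "celery": (0.0, 0.1)}

    def step(self, action):
        return self.MENU[action]

def choose(env, options):
    # The agent. A proximate preference for ice cream emerges here,
    # yet "the taste of ice cream" appears nowhere in the ultimate
    # values -- only fat and sugar do.
    return max(options, key=lambda a: ultimate_reward(env.step(a)))

env = ToyFoodEnvironment()
print(choose(env, ["ice_cream", "celery"]))  # -> ice_cream
```

The point of the sketch is only the division of labor: change the reward function and the agent’s “desires” follow, without the agent ever representing the reward function’s terms as such.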
Genes built our desires, but their “purposes” in doing so are not identical to those desires.
I think I already acknowledged that distinction—with my example of “taking rewarding drugs” being something that the brain wants, but the genes do not.
Whereas, in the context of our conversation here, it would not be too wrong to say that humans’ purposes are our desires.
Maybe—depending on which parts of yourself you most identify with.
There’s a pretty neat and general way of abstracting learning systems out into agent, ultimate values and environment using the framework of reinforcement learning.
Interesting. I’d appreciate references or links. To me, the interesting and still open question is how these “ultimate” values relate to the outcome of rational reflection and experimentation by the individual.
I just mean the cybernetic agent-environment framework with a reward/utility signal. For example, see page 1 of Hibbard’s recent paper, page 5 of “Universal Algorithmic Intelligence: A Mathematical Top-Down Approach”, or page 39 of Machine Super Intelligence.
To me, the interesting and still open question is how these “ultimate” values relate to the outcome of rational reflection and experimentation by the individual.
So: changes to ultimate values can potentially happen when there are various kinds of malfunction. Memetic hijacking illustrates one way in which it can happen. Nature normally attempts to build systems which are robust and resistant to this kind of change—but such changes can happen.
Maybe existing victims of memetic hijacking could use “reflection and experimentation” to help them to sort their heads out and recover from the attack on their values.
Thanks for the links. Both the AIXI paper and the Machine Super Intelligence thesis use cardinal utilities, or in the latter case rational-number approximations to cardinal utilities (I’m not sure whether economists have a separate label for those), for their reward functions. I suspect this limits their applicability to humans and other organisms.
Maybe existing victims of memetic hijacking could use “reflection and experimentation” to help them to sort their heads out and recover from the attack on their values.
In some cases. But the whole concept of “rationality” can probably usefully be viewed as a memeplex. And rational reflection leading to its rejection, while not a priori impossible, seems unlikely.
The good news from a gene’s point of view—in case anyone still cares about that—is that our genes probably co-evolved with rationality memes for a significant time period. Lately, though, the rate of evolution of the memes may be leaving the genes in the dust. That is, their time constants of adaptation to environmental change differ dramatically.
Both the AIXI paper and the Machine Super Intelligence thesis use cardinal utilities, or in the latter case rational-number approximations to cardinal utilities (I’m not sure whether economists have a separate label for those), for their reward functions. I suspect this limits their applicability to humans and other organisms.
FWIW, I don’t see that as much of a problem. I’m more concerned about humans having a multitude of pain sensors (multiple reward channels), and a big mountain of a priori knowledge about which actions are associated with which types of pain—though that doesn’t exactly break the utility-based models either.
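For what that concern is worth, here is a sketch of the collapse those utility-based models presuppose (channel names and weights entirely invented for illustration). The models assume the many channels are weighted into a single cardinal number before learning; whether organisms actually perform any such collapse is exactly what is in question:

```python
# Hypothetical pain channels with a priori weights -- made-up values.
PAIN_CHANNELS = {"burn": 5.0, "ache": 1.0, "sting": 2.0}

def scalar_utility(signals):
    # The collapse the utility-based frameworks presuppose: many
    # reward/pain channels weighted into one cardinal utility.
    return -sum(PAIN_CHANNELS[name] * level
                for name, level in signals.items())

print(scalar_utility({"burn": 0.2, "ache": 0.5, "sting": 0.0}))  # -> -1.5
```

If no weighting like `PAIN_CHANNELS` exists in the organism, the single-scalar abstraction is doing real modeling work rather than merely summarizing.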
But the whole concept of “rationality” can probably usefully be viewed as a memeplex. And rational reflection leading to its rejection, while not a priori impossible, seems unlikely.
Sure, but “rationality” and “values” are pretty orthogonal ideas. You can use rational thinking to pursue practically any set of values. I suppose if your values are crazy ones, a dose of rationality might have an effect.
Lately, though, the rate of evolution of the memes may be leaving the genes in the dust.
Yes indeed. That’s been going on since the stone age, and it has left its mark on human nature.
Pretty much, but I think not totally. But we’ve gone far enough afield already. I’ll note this as a possible topic for a future discussion post.