Humans desire fat and sugar. Those desires are built in—coded in genes.
That’s a half-truth, or maybe a truth-value-less sentence. One could just as easily say humans desire calories and vitamin C.
Calories, yes, vitamin C—probably not. It took quite a while for the link between vitamin C deficiency and the foods containing it to be discovered. Humans apparently don’t have an instinctive craving for it—perhaps because their diet is normally saturated with it.
Or perhaps humans simply desire survival and reproduction.
Sure—e.g. the maternal instinct.
I’m doubtful that any of these interpretations can claim to be the true one, at least until an individual human endorses one.
So: those are not really different interpretations of the same facts, but statements covering several different desires—so we don’t have to choose between them.
It is better to regard desires for chocolate gateau and ice cream as learned associations with things actually valued.
“Actually valued” suggests that ice cream is not actually valued except as a means to fat and sugar, which is definitely not true. Just try taking away someone’s ice cream and offering lard and sugar in their stead.
I didn’t intend to imply that fat and sugar represent all of the human gustatory desires.
We don’t have to choose between statements of which desires are “coded in genes”, but if we affirm too many of them we’ll have more assumptions than are needed to explain the data. Why not just say that a purpose of the genes is to bring it about that in an appropriate environment the organism will consume adequate calories—rather than saying that the genes program a desire for fat? “Desire” is a psychological description first and foremost, and only incidentally, if at all, a term of evolutionary biology.
Do organisms desire fat or calories? They mostly like the taste sensations and the satiety associated with them. As I understand it, there are separate taste receptors for fat and sugar—so it is probably better to say that humans desire some types of fat and sugar than to say that they desire calories.
Why not just say that a purpose of the genes is to bring it about that in an appropriate environment the organism will consume adequate calories—rather than saying that the genes program a desire for fat?
There’s little difference—since the way the genes bring about the consumption is via desires. FWIW, I didn’t just say “fat”, I said “fat and sugar”—and they were examples of desires, not an exhaustive list.
“Desire” is a psychological description first and foremost, and only incidentally, if at all, a term of evolutionary biology.
Genes build our desires, though—in much the same way that they build our hearts and legs.
They mostly like the taste sensations and the satiety associated with them. As I understand it, there are separate taste receptors for fat and sugar—so it is probably better to say that humans desire some types of fat and sugar than to say that they desire calories.
And by the same token, it is probably even better to say that they desire ice cream and/or the taste of ice cream, and so on for other particular foods. The brain integrates information from the receptors you mentioned together with other taste receptors, smell receptors, texture sensations, and so on. Percepts and concepts are formed from the integrated total, and these frame the language of desire. Probably some of the best chefs and food critics do directly perceive, and savor, fat and sugar contents as such, but I doubt whether the same applies to all of us. Most of us are too distracted by the rich complex gestalt experience. This isn’t to deny, of course, that our desires are strongly influenced by fat content.
It seems to me that you are not allowing enough slippage between two levels of explanation: what the genes want, and what the organisms want. Genes built our desires, but their “purposes” in doing so are not identical to those desires. Whereas, in the context of our conversation here, it would not be too wrong to say that humans’ purposes are our desires.
By the way, I apologize if it sounded like I’m trying to oversimplify your position. In a (failed) economy of words, I figured it was OK to focus on one of the examples, namely a desire for fat.
As I understand it, there are separate taste receptors for fat and sugar—so it is probably better to say that humans desire some types of fat and sugar than to say that they desire calories.
And by the same token, it is probably even better to say that they desire ice cream and/or the taste of ice cream, and so on for other particular foods.
So: my position is that it is fine to talk like that—provided one makes the distinction between proximate and ultimate values. There’s a pretty neat and general way of abstracting learning systems out into agent, ultimate values and environment using the framework of reinforcement learning. Under that abstraction, “the taste of ice cream” is not one of the ultimate values. Those values might include diversity, contrast and texture as well as fat and sugar—but I don’t think there’s much of a case for putting “the taste of ice cream” in there.
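To make that abstraction concrete, here is a minimal sketch in Python. The environment, the food features, and the reward weights are all invented for illustration; the point is just the division of labour. The reward function plays the role of the ultimate values, while the learned value table holds the proximate values, which is where something like “the taste of ice cream” ends up:

    import random

    # Ultimate values: a fixed reward function over low-level features.
    # The weights (fat, sugar, contrast) are invented stand-ins for
    # whatever evolution actually wired in.
    REWARD_WEIGHTS = {"fat": 1.0, "sugar": 0.8, "contrast": 0.3}

    def reward(features):
        """Apply the ultimate values to a sensed bundle of features."""
        return sum(REWARD_WEIGHTS[k] * v for k, v in features.items())

    # Environment: foods the agent can choose, described by their features.
    FOODS = {
        "ice cream": {"fat": 0.9, "sugar": 0.9, "contrast": 0.5},
        "lard":      {"fat": 1.0, "sugar": 0.0, "contrast": 0.0},
        "celery":    {"fat": 0.0, "sugar": 0.1, "contrast": 0.2},
    }

    # Proximate values: learned estimates attached to whole percepts
    # ("ice cream"), not to the features the reward function tracks.
    value = {food: 0.0 for food in FOODS}
    alpha, epsilon = 0.1, 0.2  # learning rate, exploration rate

    for step in range(1000):
        if random.random() < epsilon:
            choice = random.choice(list(FOODS))       # explore
        else:
            choice = max(value, key=value.get)        # exploit
        r = reward(FOODS[choice])
        value[choice] += alpha * (r - value[choice])  # TD-style update

    print(value)  # the agent comes to "desire" ice cream as such

Under that division, taking away the ice cream doesn’t touch the ultimate values at all; it just forces the proximate values to be relearned over whatever foods remain.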
Genes built our desires, but their “purposes” in doing so are not identical to those desires.
I think I already acknowledged that distinction—with my example of “taking rewarding drugs” being something that the brain wants, but the genes do not.
Whereas, in the context of our conversation here, it would not be too wrong to say that humans’ purposes are our desires.
Maybe—depending on which parts of yourself you most identify with.
There’s a pretty neat and general way of abstracting learning systems out into agent, ultimate values and environment using the framework of reinforcement learning.
Interesting. I’d appreciate references or links. To me, the interesting and still open question is how these “ultimate” values relate to the outcome of rational reflection and experimentation by the individual.
I’d appreciate references or links.

I just mean the cybernetic agent-environment framework with a reward/utility signal. For example, see page 1 of Hibbard’s recent paper, page 5 of “Universal Algorithmic Intelligence: A mathematical top-down approach”, or page 39 of Machine Super Intelligence.

To me, the interesting and still open question is how these “ultimate” values relate to the outcome of rational reflection and experimentation by the individual.
So: changes to ultimate values can potentially happen when there are various kinds of malfunction. Memetic hijacking illustrates one way in which it can happen. Nature normally attempts to build systems which are robust and resistant to this kind of change—but such changes can happen.
Maybe existing victims of memetic hijacking could use “reflection and experimentation” to help them to sort their heads out and recover from the attack on their values.
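One toy way to picture that kind of malfunction, with everything invented for illustration: the agent’s ultimate values are meant to be read-only, but a flaw lets an incoming meme overwrite the reward function itself, rather than merely the beliefs used to pursue it:

    # Toy sketch of "memetic hijacking" as a value-overwrite malfunction.
    # All names and reward functions here are invented for illustration.

    class Agent:
        def __init__(self, reward_fn):
            self._reward_fn = reward_fn  # ultimate values: meant to be fixed

        def evaluate(self, outcome):
            return self._reward_fn(outcome)

        def ingest_meme(self, meme):
            # A robust design would vet or reject this; the hijack is
            # that the meme replaces the reward function itself.
            if meme.get("overrides_values"):
                self._reward_fn = meme["new_reward_fn"]

    agent = Agent(reward_fn=lambda outcome: outcome.get("calories", 0))
    print(agent.evaluate({"calories": 500}))  # 500: original values

    hijack = {"overrides_values": True,
              "new_reward_fn": lambda outcome: outcome.get("status", 0)}
    agent.ingest_meme(hijack)
    print(agent.evaluate({"calories": 500}))  # 0: the values have drifted

“Reflection and experimentation” would then amount to the agent noticing the overwrite and restoring, or at least re-examining, the earlier reward function.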
Thanks for the links. Both AIXI and Machine Super Intelligence use cardinal utilities, or in the latter case rational-number approximations to cardinal utilities (not sure if economists have a separate label for that), for their reward functions. I suspect this limits their applicability to humans and other organisms.
Maybe existing victims of memetic hijacking could use “reflection and experimentation” to help them to sort their heads out and recover from the attack on their values.
In some cases. But the whole concept of “rationality” can probably usefully be viewed as a memeplex. And rational reflection leading to its rejection, while not a priori impossible, seems unlikely.
The good news from a gene’s point of view—in case anyone still cares about that—is that our genes probably co-evolved with rationality memes for a significant time period. Lately, though, the rate of evolution of the memes may be leaving the genes in the dust. That is, their time constants of adaptation to environmental change differ dramatically.
Both AIXI and Machine Super Intelligence use cardinal utilities, or in the latter case rational-number approximations to cardinal utilities (not sure if economists have a separate label for that), for their reward functions. I suspect this limits their applicability to humans and other organisms.
FWIW, I don’t see that as much of a problem. I’m more concerned about humans having a multitude of pain sensors (multiple reward channels), and a big mountain of a priori knowledge about which actions are associated with which types of pain—though that doesn’t exactly break the utility-based models either.
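The standard move for folding multiple reward channels into those models is to scalarize them; here is a sketch, with channel names and weights made up for illustration:

    # Multiple reward channels need not break scalar-utility models:
    # weight each channel and sum. Names and weights are invented.

    PAIN_CHANNELS = {"sharp": -5.0, "burn": -8.0, "ache": -2.0}
    PLEASURE_CHANNELS = {"taste": 3.0, "satiety": 2.0}

    def scalar_utility(signals):
        """Collapse per-channel magnitudes into a single reward number."""
        weights = {**PAIN_CHANNELS, **PLEASURE_CHANNELS}
        return sum(weights[ch] * mag for ch, mag in signals.items())

    # Something tasty but scalding:
    print(scalar_utility({"taste": 0.9, "burn": 0.4}))  # 2.7 - 3.2 = -0.5

The a priori knowledge about which actions lead to which pains would then live in the agent’s model rather than in the utility function, so, as you say, it doesn’t break the abstraction either.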
But the whole concept of “rationality” can probably usefully be viewed as a memeplex. And rational reflection leading to its rejection, while not a priori impossible, seems unlikely.
Sure, but “rationality” and “values” are pretty orthogonal ideas. You can use rational thinking to pursue practically any set of values. I suppose if your values are crazy ones, a dose of rationality might have an effect.
Lately, though, the rate of evolution of the memes may be leaving the genes in the dust.
Yes indeed. That’s been going on since the stone age, and it has left its mark on human nature.
Sure, but “rationality” and “values” are pretty orthogonal ideas.

Pretty much, but I think not totally. We’ve gone far enough afield already, though; I’ll note this as a possible topic for a future discussion post.