Where most of the information that composes a person comes from and what function they “should” optimise seem like rather different topics to me.
A lot of what we acquire from our environment is not information that impacts on what our goals are, but rather is used to build a model of the environment—which we then use to help us pursue our goals.
That’s true, but some of the information does impact what our goals are. We learn “values” from experience, not just “facts”. (I’m putting scare-quotes here because I believe the fact/value dichotomy is often overblown.) This gives the person a place to stand which is neither gene nor meme nor simply a mixture of the two. When we rationally reach reflective equilibrium on our goals, I believe, this will continue to be the case.
We learn “values” from experience, not just “facts”. (I’m putting scare-quotes here because I believe the fact/value dichotomy is often overblown.) This gives the person a place to stand which is neither gene nor meme nor simply a mixture of the two.
A huge amount of the value-related information that we get from our environment comes from other living entities—and from memes—attempting to manipulate us. Sometimes, they negotiate with us, or manipulate our sense data—rather than attempting to affect our values. However, sometimes they attempt to “hijack our brains”—and redirect our values towards their own ends, or those of their makers.
The biggest influences come from other humans, symbionts, pathogens and memes. Basically, most goal-directedness comes from other living, goal-directed systems—so genes and memes—though not necessarily your own genes and memes; also those of associates and pathogens. There are some simple non-living goal-directed systems out there—but none of them have access to technology that allows them to influence our values.
The next biggest source of human values is described by the theory of self-organising systems. The brain is probably the most important self-organising system involved. It mostly has desires that arise by virtue of being a large reinforcement learning system. Essentially, the brain sometimes acts as though it wants its own reward signals—and it fulfills those desires by doing things like taking rewarding drugs. The brain was made by genes—but wireheading is not exactly what the genes want.
The next-most significant effect on human values is probably mistakes (e.g. sub-optimal adaptations). I note that these do not represent particularly noble influences either.
Many humans delight in seeking out noble sources of value—probably for signalling reasons. They do not like hearing that genes and memes are primarily responsible for what they hold most dear—and that the next biggest influences are probably wireheading and mistakes. This seems to be one source of “memetic resistance”: people just can’t bear to hear this story about their own values. If you think there are other important sources of human values—well, it isn’t terribly clear why you would think that.
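The wireheading point above—that a reward-maximising learner can come to want the reward signal itself, rather than what its designer intended—can be sketched as a toy two-armed bandit. Everything here (action names, payoffs, learning parameters) is invented for illustration:

```python
import random

# Two actions: "forage" earns reward the way the designer (the "genes")
# intended; "wirehead" writes a large value directly to the reward signal.
def reward_for(action):
    return 1.0 if action == "forage" else 5.0

q = {"forage": 0.0, "wirehead": 0.0}   # learned action values
alpha, epsilon = 0.1, 0.1              # learning rate, exploration rate

random.seed(0)
for _ in range(1000):
    # Epsilon-greedy: mostly exploit the best-looking action,
    # occasionally explore at random.
    if random.random() < epsilon:
        action = random.choice(list(q))
    else:
        action = max(q, key=q.get)
    q[action] += alpha * (reward_for(action) - q[action])

# Once exploration stumbles on the self-stimulating action a few times,
# the learner switches to it for good, defeating the designer's purpose.
print(max(q, key=q.get))
```

Making the self-stimulation action hard to discover or costly—as nature apparently does—delays the switch but does not rule it out in principle.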
The next biggest source of human values is described by the theory of self-organising systems. The brain is probably the most important self-organising system involved. It mostly has desires that arise by virtue of being a large reinforcement learning system.
That’s the sort of thing I had in mind. Because our conceptual framework is learned from experience, what we learn to seek is not necessarily what our genes “want”. Of course if you place a human being in “the ancestral environment” then you will get learned values that serve the “aim of the genes” reasonably well—but not perfectly. In the modern environment, less so. The brain sometimes wants its own reward signals per se, and more often wants certain distal events that have been favored over the learning process.
Having thus discovered certain activities to be meaningful and rewarding, people go on to tell each other about them. This strongly shapes the meme environment.
How noble or ignoble this is may be in the eye of the beholder. It doesn’t look so ignoble to me.
Because our conceptual framework is learned from experience, what we learn to seek is not necessarily what our genes “want”. Of course if you place a human being in “the ancestral environment” then you will get learned values that serve the “aim of the genes” reasonably well—but not perfectly. In the modern environment, less so.
The idea of values coming from genes does not say anything about whether those desires are adaptive in the modern environment. Humans desire fat and sugar. Those desires are built in—coded in genes. That they are currently probably maladaptive is a different issue.
Saying that we have desires for chocolate gateau and ice cream that we must have learned from our environment seems like a “less helpful” way of looking at the situation to me. It is better to regard chocolate gateau and ice cream as being learned associations with things actually valued. If they are to be classified as “learned values”, they are learned instrumental values.
Humans desire fat and sugar. Those desires are built in—coded in genes.
That’s a half-truth, or maybe a truth-value-less sentence. One could just as easily say humans desire calories and vitamin C. Fat and sugar just happen to be, in the ancestral environment, means to these ends. Or perhaps humans simply desire survival and reproduction. I’m doubtful that any of these interpretations can claim to be the true one, at least until an individual human endorses one.
It is better to regard chocolate gateau and ice cream as being learned associations with things actually valued.
“Actually valued” suggests that ice cream is not actually valued except as a means to fat and sugar, which is definitely not true. Just try taking away someone’s ice cream and offering lard and sugar in their stead.
Humans desire fat and sugar. Those desires are built in—coded in genes.
That’s a half-truth, or maybe a truth-value-less sentence. One could just as easily say humans desire calories and vitamin C.
Calories, yes, vitamin C—probably not. It took quite a while for the link between vitamin C deficiency and the foods containing it to be discovered. Humans apparently don’t have an instinctive craving for it—perhaps because their diet is normally saturated with it.
Or perhaps humans simply desire survival and reproduction.
Sure—e.g. the maternal instinct.
I’m doubtful that any of these interpretations can claim to be the true one, at least until an individual human endorses one.
So: those are not really different interpretations of the same facts, but statements covering several different desires—so we don’t have to choose between them.
It is better to regard chocolate gateau and ice cream as being learned associations with things actually valued.
“Actually valued” suggests that ice cream is not actually valued except as a means to fat and sugar, which is definitely not true. Just try taking away someone’s ice cream and offering lard and sugar in their stead.
I didn’t intend to imply that fat and sugar represent all of the human gustatory desires.
We don’t have to choose between statements of which desires are “coded in genes”, but if we affirm too many of them we’ll have more assumptions than are needed to explain the data. Why not just say that a purpose of the genes is to bring it about that in an appropriate environment the organism will consume adequate calories—rather than saying that the genes program a desire for fat? “Desire” is a psychological description first and foremost, and only incidentally, if at all, a term of evolutionary biology.
Do organisms desire fat or calories? They mostly like the associated taste sensations and associated satiety. As I understand it, there are separate taste receptors for fat and sugar—so it is probably better to say that humans desire some types of fat and sugar than to say that they desire calories.
Why not just say that a purpose of the genes is to bring it about that in an appropriate environment the organism will consume adequate calories—rather than saying that the genes program a desire for fat?
There’s little difference—since the way the genes bring about the consumption is via desires. FWIW, I didn’t just say “fat”, I said “fat and sugar”—and they were examples of desires, not an exhaustive list.
“Desire” is a psychological description first and foremost, and only incidentally, if at all, a term of evolutionary biology.
Genes build our desires, though—in much the same way that they build our hearts and legs.
They mostly like the associated taste sensations and associated satiety. As I understand it, there are separate taste receptors for fat and sugar—so it is probably better to say that humans desire some types of fat and sugar than to say that they desire calories.
And by the same token, it is probably even better to say that they desire ice cream and/or the taste of ice cream, and so on for other particular foods. The brain integrates information from the receptors you mentioned together with other taste receptors, smell receptors, texture sensations, and so on. Percepts and concepts are formed from the integrated total, and these frame the language of desire. Probably some of the best chefs and food critics do directly perceive, and savor, fat and sugar contents as such, but I doubt whether the same applies to all of us. Most of us are too distracted by the rich complex gestalt experience. This isn’t to deny, of course, that our desires are strongly influenced by fat content.
It seems to me that you are not allowing enough slippage between two levels of explanation: what the genes want, and what the organisms want. Genes built our desires, but their “purposes” in doing so are not identical to those desires. Whereas, in the context of our conversation here, it would not be too wrong to say that humans’ purposes are our desires.
By the way, I apologize if it sounded like I’m trying to oversimplify your position. In a (failed) economy of words, I figured it was OK to focus on one of the examples, namely a desire for fat.
As I understand it, there are separate taste receptors for fat and sugar—so it is probably better to say that humans desire some types of fat and sugar than to say that they desire calories.
And by the same token, it is probably even better to say that they desire ice cream and/or the taste of ice cream, and so on for other particular foods.
So: my position is that it is fine to talk like that—provided one makes the distinction between proximate and ultimate values. There’s a pretty neat and general way of abstracting learning systems out into agent, ultimate values and environment using the framework of reinforcement learning. Under that abstraction, “the taste of ice cream” is not one of the ultimate values. Those values might include diversity, contrast and texture as well as fat and sugar—but I don’t think there’s much of a case for putting “the taste of ice cream” in there.
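The proximate/ultimate distinction above can be made concrete with a minimal reinforcement-learning sketch. All weights and food profiles below are invented for illustration: the fixed reward function stands in for ultimate values (fat, sugar, texture), while the values the agent learns to attach to whole percepts like “ice cream” are its proximate, instrumental values:

```python
import random

# Ultimate values: a fixed reward function over underlying features.
def reward(features):
    return 0.5 * features["fat"] + 0.4 * features["sugar"] + 0.1 * features["texture"]

# The environment: percepts ("foods") and their hidden feature profiles.
FOODS = {
    "ice_cream": {"fat": 0.8, "sugar": 0.9, "texture": 0.7},
    "lard":      {"fat": 1.0, "sugar": 0.0, "texture": 0.1},
    "gateau":    {"fat": 0.7, "sugar": 0.8, "texture": 0.8},
}

# Proximate values: estimates the agent learns to attach to percepts.
# It never sees the reward function directly; it learns by sampling.
learned_value = {food: 0.0 for food in FOODS}
alpha = 0.1  # learning rate

random.seed(0)
for _ in range(2000):
    food = random.choice(list(FOODS))
    learned_value[food] += alpha * (reward(FOODS[food]) - learned_value[food])

# The agent ends up "desiring ice cream": a proximate value that tracks,
# but is not identical to, the ultimate reward function.
print(max(learned_value, key=learned_value.get))
```

Note that the learner comes to prefer ice cream to lard even though lard is pure fat: the percept-level desire is shaped by, but distinct from, the feature-level reward, which is the lard-and-sugar point made earlier.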
Genes built our desires, but their “purposes” in doing so are not identical to those desires.
I think I already acknowledged that distinction—with my example of “taking rewarding drugs” being something that the brain wants, but the genes do not.
Whereas, in the context of our conversation here, it would not be too wrong to say that humans’ purposes are our desires.
Maybe—depending on which parts of yourself you most identify with.
There’s a pretty neat and general way of abstracting learning systems out into agent, ultimate values and environment using the framework of reinforcement learning.
Interesting. I’d appreciate references or links. To me, the interesting and still open question is how these “ultimate” values relate to the outcome of rational reflection and experimentation by the individual.
To me, the interesting and still open question is how these “ultimate” values relate to the outcome of rational reflection and experimentation by the individual.
So: changes to ultimate values can potentially happen when there are various kinds of malfunction. Memetic hijacking illustrates one way in which it can happen. Nature normally attempts to build systems which are robust and resistant to this kind of change—but such changes can happen.
Maybe existing victims of memetic hijacking could use “reflection and experimentation” to help them to sort their heads out and recover from the attack on their values.
Thanks for the links. Both the AIXI paper and Machine Super Intelligence use cardinal utilities, or in the latter case rational-number approximations to cardinal utilities (not sure if economists have a separate label for that), for their reward functions. I suspect this limits their applicability to humans and other organisms.
Maybe existing victims of memetic hijacking could use “reflection and experimentation” to help them to sort their heads out and recover from the attack on their values.
In some cases. But the whole concept of “rationality” can probably usefully be viewed as a memeplex. And rational reflection leading to its rejection, while not a priori impossible, seems unlikely.
The good news from a gene’s point of view—in case anyone still cares about that—is that our genes probably co-evolved with rationality memes for a significant time period. Lately, though, the rate of evolution of the memes may be leaving the genes in the dust. That is, their time constants of adaptation to environmental change differ dramatically.
Both the AIXI paper and Machine Super Intelligence use cardinal utilities, or in the latter case rational-number approximations to cardinal utilities (not sure if economists have a separate label for that), for their reward functions. I suspect this limits their applicability to humans and other organisms.
FWIW, I don’t see that as much of a problem. I’m more concerned about humans having a multitude of pain sensors (multiple reward channels), and a big mountain of a priori knowledge about which actions are associated with which types of pain—though that doesn’t exactly break the utility-based models either.
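The multiple-channel worry need not break the single-utility framework, because distinct channels can be scalarised by a weighting before they reach the learner. A sketch in which the channel names, weights and action priors are all invented for illustration:

```python
# Hypothetical reward channels, standing in for the body's many distinct
# pain and pleasure signals.
CHANNEL_WEIGHTS = {"burn": -3.0, "ache": -1.0, "taste": 2.0, "satiety": 1.5}

def scalar_reward(signals):
    """Collapse per-channel intensities (0..1) into the single scalar
    reward that AIXI-style utility models assume."""
    return sum(CHANNEL_WEIGHTS[channel] * level for channel, level in signals.items())

# Innate a priori knowledge: action -> expected channel activations.
PRIORS = {
    "touch_stove": {"burn": 0.9, "ache": 0.2, "taste": 0.0, "satiety": 0.0},
    "eat_meal":    {"burn": 0.0, "ache": 0.0, "taste": 0.7, "satiety": 0.8},
}

# Even before any learning, the agent can rank actions by expected
# scalarised reward; the multi-channel structure survives in the weights.
ranking = sorted(PRIORS, key=lambda a: scalar_reward(PRIORS[a]), reverse=True)
print(ranking)
```

The design choice is simply where the aggregation happens: either the reward function of the model absorbs the channel weights, or each channel is treated as its own objective, which leads into multi-objective formulations instead.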
But the whole concept of “rationality” can probably usefully be viewed as a memeplex. And rational reflection leading to its rejection, while not a priori impossible, seems unlikely.
Sure, but “rationality” and “values” are pretty orthogonal ideas. You can use rational thinking to pursue practically any set of values. I suppose if your values are crazy ones, a dose of rationality might have an effect.
Lately, though, the rate of evolution of the memes may be leaving the genes in the dust.
Yes indeed. That’s been going on since the stone age, and it has left its mark on human nature.
I’d appreciate references or links.
I just mean the cybernetic agent-environment framework with a reward/utility signal. For example, see page 1 of Hibbard’s recent paper, page 5 of Universal Algorithmic Intelligence: A Mathematical Top-Down Approach, or page 39 of Machine Super Intelligence.
“rationality” and “values” are pretty orthogonal ideas. You can use rational thinking to pursue practically any set of values.
Pretty much, but I think not totally. But we’ve gone far enough afield already. I’ll note this as a possible topic for a future discussion post.