But at least we finally noticed we’re robots, and we can use the skills of rationality to hop off our habit treadmills and pursue our values instead.
How do you determine whether you’ve really hopped off the treadmill, vs. using higher-order desires as a sophisticated long-term strategy to spread your genes and memes? (Is this covered in the book?)
Does it matter? Your genes and memes basically are who you are. They contain most of the necessary information to make you you; you cannot exist without the information describing you existing (however hidden or unavailable it may be to any particular mind) as well!
Freedom, in any reasonable sense, is the ability to make the future universe end up in states that you find desirable. Altering the fitness landscape, or letting it stay just as it is, are both valid courses of action towards this goal, though the latter is very unlikely to be the wise choice for us. Hacking your mind to fool your memes into helping spread your genes, or vice versa, is also merely a tactic towards this goal. Replacing your genes and memes with ones that you are supremely confident will do the job of making the future universe as you’d like, or changing the environment they express themselves in, seems valid as well.
Does it matter? Your genes and memes basically are who you are. They contain most of the necessary information to make you you [...]
Well, probably not in an information-theory sense. Genes and memes are part of who you are, but there’s a whole bunch of other stuff that wasn’t inherited from anywhere and was instead learned from the environment. It is likely to consist of more information than the genes and memes combined.
Well, probably not in an information-theory sense. Genes and memes are part of who you are, but there’s a whole bunch of other stuff that wasn’t inherited from anywhere and was instead learned from the environment. It is likely to consist of more information than the genes and memes combined.
Anything you learned from the environment and can be transmitted to another brain is a meme, though not necessarily a very successful one. Memes seem to be used more or less interchangeably with ideas, which obviously isn’t right, since there are things we learn that we can’t transmit with currently available tools. If a day comes when I can directly upload and share the exact smell of a loved one or the muscle memory of me playing basketball for half an hour those things too will become memes, or some other kind of replicator if one wants to quibble about definitions.
But I’m using “learned from the environment” in too narrow a meaning; I think you are using “learn” here to mean any difference of behaviour or function that results from your interaction with your environment. If a slight heavy-metal contamination in my childhood caused me to grow up a bit less neurotypical, or less likely to have a religious experience, or to fall in love, or anything at all, then, if someone were trying to upload my brain, that would clearly be something that would need to be simulated! It constitutes information about me, even if it isn’t something we would, in the everyday sense of the word, call “learning”.
Perhaps I’m spending too much of my intellectual life in RH’s scenario of a future of emulated minds competing with each other at the Malthusian margin. There, were my mind to be perfectly simulated on another medium, any of the states of my body, or the rules that govern how those states transition to other states, would be a piece of information that could be shared and recombined with others. And you would necessarily find some propagating more and others not at all; in essence, replicator dynamics would, I think, be operating on the totality of what I am. (I wish to emphasise that I am making several implicit assumptions about the simulated environment here, and these may not necessarily hold.)
But my primary point was, you can’t really make a you without including lots of the information encoded in your genes or memes, even if this isn’t the totality, or, as you point out, even the majority of the information needed to build a you. Looking at them just as replicators, one could argue that if you change their medium they did indeed survive the transition, and that you, even in your uploaded and heavily modified form, or in your “rational”, unbiased form, are still the lumbering survival machine of the subset of them that survived the latest selection challenge in their long, long history.
Anything you learned from the environment and can be transmitted to another brain is a meme, though not necessarily a very successful one.
Memes are what you get culturally. There’s a big mountain of human experience that is not culturally transmitted—because it is learned anew in each generation. When you learn how to tie a new knot, maybe 10% of the skill is culturally transmitted, and 90% is muscle movement information discovered by trial and error on the spot while figuring out how to get to the goal.
Memes seem to be used more or less interchangeably with ideas, which obviously isn’t right, since there are things we learn that we can’t transmit with currently available tools.
Indeed: “A meme is not equivalent to an idea. It’s not an idea, it’s not equivalent to anything else, really.”—Sue Blackmore
If a day comes when I can directly upload and share the exact smell of a loved one or the muscle memory of me playing basketball for half an hour those things too will become memes, or some other kind of replicator if one wants to quibble about definitions.
Yes, when we can upload our minds, things like knowledge of how balls bounce will be capable of being transmitted memetically—rather than being learned anew in each generation, which is what happens today.
However, that day has not yet come.
But my primary point was, you can’t really make a you without including lots of the information encoded in your genes or memes [...]
Where most of the information that composes a person comes from and what function they “should” optimise seem like rather different topics to me.
A lot of what we acquire from our environment is not information that impacts on what our goals are, but rather is used to build a model of the environment—which we then use to help us pursue our goals.
That’s true, but some of the information does impact what our goals are. We learn “values” from experience, not just “facts”. (I’m putting scare-quotes here because I believe the fact/value dichotomy is often overblown.) This gives the person a place to stand which is neither gene nor meme nor simply a mixture of the two. When we rationally reach reflective equilibrium on our goals, I believe, this will continue to be the case.
We learn “values” from experience, not just “facts”. (I’m putting scare-quotes here because I believe the fact/value dichotomy is often overblown.) This gives the person a place to stand which is neither gene nor meme nor simply a mixture of the two.
A huge amount of the value-related information that we get from our environment comes from other living entities attempting to manipulate us. Sometimes, they negotiate with us, or manipulate our sense data—rather than attempting to affect our values. However, sometimes they attempt to “hijack our brains”—and redirect our values towards their own ends, or those of their makers.
The biggest influences come from other humans, symbionts, pathogens and memes. Basically most goal directedness comes from other living, goal-directed systems—so genes and memes—though not necessarily your own genes and memes.
The next biggest source of human values comes from the theory of self-organising systems. The brain is probably the most important self-organising system involved. It mostly has desires that arise by virtue of it being a large reinforcement learning system. Essentially, the brain sometimes acts as though it wants its own reward signals—and it fulfills those desires by doing things like taking rewarding drugs. The brain was made by genes—but wireheading is not exactly what the genes want.
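To make the “acts as though it wants its own reward signals” point concrete, here is a toy sketch (entirely my own construction, not from the thread or any particular textbook; the action names and reward numbers are invented): a simple epsilon-greedy learner that is offered a direct reward-channel action ends up preferring it to the behaviour the reward signal was originally a proxy for.

```python
import random

random.seed(0)  # deterministic run for the example

# "forage" is the behaviour the reward signal was meant to be a proxy for;
# "wirehead" stimulates the reward channel directly.
REWARD = {"forage": 1.0, "wirehead": 5.0}
ACTIONS = list(REWARD)

def train(episodes=1000, epsilon=0.1, lr=0.1):
    """Epsilon-greedy learner keeping a running value estimate per action."""
    q = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        if random.random() < epsilon:
            a = random.choice(ACTIONS)   # explore
        else:
            a = max(q, key=q.get)        # exploit current estimates
        q[a] += lr * (REWARD[a] - q[a])  # move estimate towards observed reward
    return q

q = train()
# The learner ends up valuing direct reward-channel stimulation most,
# even though that serves no "genetic" purpose.
assert q["wirehead"] > q["forage"]
```

The “genes” meant the reward to track foraging, but the learner only ever sees the reward signal itself—so once exploration stumbles on the wirehead action, it wins.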
The next-most significant effect on human values is probably mistakes (e.g. sub-optimal adaptations).
Many humans delight in seeking out noble sources of value—probably for signalling reasons. They do not like hearing that genes and memes are primarily responsible for what they hold most dear—and the next biggest influences are probably wireheading and mistakes.
The next biggest source of human values comes from the theory of self-organising systems. The brain is probably the most important self-organising system involved. It mostly has desires that arise by virtue of it being a large reinforcement learning system.
That’s the sort of thing I had in mind. Because our conceptual framework is learned from experience, what we learn to seek is not necessarily what our genes “want”. Of course if you place a human being in “the ancestral environment” then you will get learned values that serve the “aim of the genes” reasonably well—but not perfectly. In the modern environment, less so. The brain sometimes wants its own reward signals per se, and more often wants certain distal events that have been favored over the course of the learning process.
Having thus discovered certain activities to be meaningful and rewarding, people go on to tell each other about them. This strongly shapes the meme environment.
How noble or ignoble this is, may be in the eyes of the beholder. It doesn’t look so ignoble to me.
Because our conceptual framework is learned from experience, what we learn to seek is not necessarily what our genes “want”. Of course if you place a human being in “the ancestral environment” then you will get learned values that serve the “aim of the genes” reasonably well—but not perfectly. In the modern environment, less so.
The idea of values coming from genes does not say anything about whether those desires are adaptive in the modern environment. Humans desire fat and sugar. Those desires are built in—coded in genes. That they are currently probably maladaptive is a different issue.
Saying that we have desires for chocolate gateau and ice cream that we must have learned from our environment seems like a “less helpful” way of looking at the situation to me. It is better to regard chocolate gateau and ice cream as being learned associations with things actually valued. If they are to be classified as being “learned values”, they are learned instrumental values.
Humans desire fat and sugar. Those desires are built in—coded in genes.
That’s a half-truth, or maybe a truth-value-less sentence. One could just as easily say humans desire calories and vitamin C. Fat and sugar just happen to be, in the ancestral environment, means to these ends. Or perhaps humans simply desire survival and reproduction. I’m doubtful that any of these interpretations can claim to be the true one, at least until an individual human endorses one.
It is better to regard chocolate gateau and ice cream as being learned associations with things actually valued.
“Actually valued” suggests that ice cream is not actually valued except as a means to fat and sugar, which is definitely not true. Just try taking away someone’s ice cream and offering lard and sugar in their stead.
Humans desire fat and sugar. Those desires are built in—coded in genes.
That’s a half-truth, or maybe a truth-value-less sentence. One could just as easily say humans desire calories and vitamin C.
Calories, yes, vitamin C—probably not. It took quite a while for the link between vitamin C deficiency and the foods containing it to be discovered. Humans apparently don’t have an instinctive craving for it—perhaps because their diet is normally saturated with it.
Or perhaps humans simply desire survival and reproduction.
Sure—e.g. the maternal instinct.
I’m doubtful that any of these interpretations can claim to be the true one, at least until an individual human endorses one.
So: those are not really different interpretations of the same facts, but statements covering several different desires—so we don’t have to choose between them.
It is better to regard chocolate gateau and ice cream as being learned associations with things actually valued.
“Actually valued” suggests that ice cream is not actually valued except as a means to fat and sugar, which is definitely not true. Just try taking away someone’s ice cream and offering lard and sugar in their stead.
It was not an intended implication that fat and sugar represent all the human gustatory desires.
We don’t have to choose between statements of which desires are “coded in genes”, but if we affirm too many of them we’ll have more assumptions than are needed to explain the data. Why not just say that a purpose of the genes is to bring it about that in an appropriate environment the organism will consume adequate calories—rather than saying that the genes program a desire for fat? “Desire” is a psychological description first and foremost, and only incidentally, if at all, a term of evolutionary biology.
Do organisms desire fat or calories? They mostly like the associated taste sensations and associated satiety. As I understand it, there are separate taste receptors for fat and sugar—so it is probably better to say that humans desire some types of fat and sugar than to say that they desire calories.
Why not just say that a purpose of the genes is to bring it about that in an appropriate environment the organism will consume adequate calories—rather than saying that the genes program a desire for fat?
There’s little difference—since the way the genes bring about the consumption is via desires. FWIW, I didn’t just say “fat”, I said “fat and sugar”—and they were examples of desires—not an exhaustive list.
“Desire” is a psychological description first and foremost, and only incidentally, if at all, a term of evolutionary biology.
Genes build our desires, though—in much the same way that they build our hearts and legs.
They mostly like the associated taste sensations and associated satiety. As I understand it, there are separate taste receptors for fat and sugar—so it is probably better to say that humans desire some types of fat and sugar than to say that they desire calories.
And by the same token, it is probably even better to say that they desire ice cream and/or the taste of ice cream, and so on for other particular foods. The brain integrates information from the receptors you mentioned together with other taste receptors, smell receptors, texture sensations, and so on. Percepts and concepts are formed from the integrated total, and these frame the language of desire. Probably some of the best chefs and food critics do directly perceive, and savor, fat and sugar contents as such, but I doubt whether the same applies to all of us. Most of us are too distracted by the rich complex gestalt experience. This isn’t to deny, of course, that our desires are strongly influenced by fat content.
It seems to me that you are not allowing enough slippage between two levels of explanation: what the genes want, and what the organisms want. Genes built our desires, but their “purposes” in doing so are not identical to those desires. Whereas, in the context of our conversation here, it would not be too wrong to say that humans’ purposes are our desires.
By the way, I apologize if it sounded like I’m trying to oversimplify your position. In a (failed) economy of words, I figured it was OK to focus on one of the examples, namely a desire for fat.
As I understand it, there are separate taste receptors for fat and sugar—so it is probably better to say that humans desire some types of fat and sugar than to say that they desire calories.
And by the same token, it is probably even better to say that they desire ice cream and/or the taste of ice cream, and so on for other particular foods.
So: my position is that it is fine to talk like that—provided one makes the distinction between proximate and ultimate values. There’s a pretty neat and general way of abstracting learning systems out into agent, ultimate values and environment using the framework of reinforcement learning. Under that abstraction, “the taste of ice cream” is not one of the ultimate values. Those values might include diversity, contrast and texture as well as fat and sugar—but I don’t think there’s much of a case for putting “the taste of ice cream” in there.
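As a sketch of the kind of decomposition I have in mind (an illustrative construction of mine; the foods, the weights and the update rule are all invented for the example), the ultimate values sit in a fixed reward function defined over fat and sugar, while named foods acquire value only instrumentally, as the agent learns:

```python
# Environment: food name -> (fat content, sugar content)
FOODS = {
    "ice_cream": (0.8, 0.9),
    "celery":    (0.0, 0.1),
    "lard":      (1.0, 0.0),
}

def ultimate_value(fat, sugar):
    # The reward function is defined over fat and sugar only;
    # "the taste of ice cream" never appears here.
    return 0.5 * fat + 0.5 * sugar

# The agent's learned, proximate valuation of named foods,
# nudged towards the reward each food actually delivers.
proximate = {name: 0.0 for name in FOODS}
for _ in range(100):
    for name, (fat, sugar) in FOODS.items():
        proximate[name] += 0.1 * (ultimate_value(fat, sugar) - proximate[name])

best = max(proximate, key=proximate.get)
assert best == "ice_cream"  # a learned *instrumental* value
```

Nothing labelled “ice_cream” occurs in the reward function; the agent’s high valuation of it is a learned association with the fat and sugar that are actually rewarded.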
Genes built our desires, but their “purposes” in doing so are not identical to those desires.
I think I already acknowledged that distinction—with my example of “taking rewarding drugs” being something that the brain wants, but the genes do not.
Whereas, in the context of our conversation here, it would not be too wrong to say that humans’ purposes are our desires.
Maybe—depending on which parts of yourself you most identify with.
There’s a pretty neat and general way of abstracting learning systems out into agent, ultimate values and environment using the framework of reinforcement learning.
Interesting. I’d appreciate references or links. To me, the interesting and still open question is how these “ultimate” values relate to the outcome of rational reflection and experimentation by the individual.
To me, the interesting and still open question is how these “ultimate” values relate to the outcome of rational reflection and experimentation by the individual
So: changes to ultimate values can potentially happen when there are various kinds of malfunction. Memetic hijacking illustrates one way in which it can happen. Nature normally attempts to build systems which are robust and resistant to this kind of change—but such changes can happen.
Maybe existing victims of memetic hijacking could use “reflection and experimentation” to help them to sort their heads out and recover from the attack on their values.
Thanks for the links. Both AIXI and Machine Super Intelligence use cardinal utilities, or, in the latter case, rational-number approximations to cardinal utilities (I’m not sure whether economists have a separate label for that), for their reward functions. I suspect this limits their applicability to humans and other organisms.
Maybe existing victims of memetic hijacking could use “reflection and experimentation” to help them to sort their heads out and recover from the attack on their values.
In some cases. But the whole concept of “rationality” can probably usefully be viewed as a memeplex. And rational reflection leading to its rejection, while not a priori impossible, seems unlikely.
The good news from a gene’s point of view—in case anyone still cares about that—is that our genes probably co-evolved with rationality memes for a significant time period. Lately, though, the rate of evolution of the memes may be leaving the genes in the dust. That is, their time constants of adaptation to environmental change differ dramatically.
Both AIXI and Machine Super Intelligence use cardinal utilities, or, in the latter case, rational-number approximations to cardinal utilities (I’m not sure whether economists have a separate label for that), for their reward functions. I suspect this limits their applicability to humans and other organisms.
FWIW, I don’t see that as much of a problem. I’m more concerned about humans having a multitude of pain sensors (multiple reward channels), and a big mountain of a-priori knowledge about which actions are associated with which types of pain—though that doesn’t exactly break the utility-based models either.
But the whole concept of “rationality” can probably usefully be viewed as a memeplex. And rational reflection leading to its rejection, while not a priori impossible, seems unlikely.
Sure, but “rationality” and “values” are pretty orthogonal ideas. You can use rational thinking to pursue practically any set of values. I suppose if your values are crazy ones, a dose of rationality might have an effect.
Lately, though, the rate of evolution of the memes may be leaving the genes in the dust.
Yes indeed. That’s been going on since the stone age, and it has left its mark on human nature.
We learn “values” from experience, not just “facts”. (I’m putting scare-quotes here because I believe the fact/value dichotomy is often overblown.) This gives the person a place to stand which is neither gene nor meme nor simply a mixture of the two.
A huge amount of the value-related information that we get from our environment comes from other living entities—and from memes—attempting to manipulate us. Sometimes, they negotiate with us, or manipulate our sense data—rather than attempting to affect our values. However, sometimes they attempt to “hijack our brains”—and redirect our values towards their own ends, or those of their makers.
Basically most goal directedness comes from living, goal-directed systems—so genes and memes—though not necessarily your genes and memes—also those of associates and pathogens. There are some simple non-living goal-directed systems out there—but none of them have access to technology that allows them to influence our values.
If you think there are other important sources of human values—well, it isn’t terribly clear why you would think that.
Many humans delight in seeking out noble sources of value, for signalling reasons. They can’t stand to hear that genes and memes are primarily responsible for what they hold most dear—even though that’s the actual situation. This seems to be one source of “memetics resistance”—people just can’t bear to hear this story about their own values.
Alas, the next-most significant effect on human values is probably mistakes (e.g. sub-optimal adaptations). I note that these do not represent particularly noble influences either.
Does it matter? Your genes and memes basically are who you are. They contain most of the necessary information to make you you
They are two important parts, but there is a whole heap of important information stored in the brain that isn’t ‘memes’: sentiments, desires, weightings, skills, habits, aversions. They just don’t fit under ‘memes’—I mean, whole parts of the brain don’t store memes at all.
Yes, when we can upload our minds, things like knowledge of how balls bounce will be capable of being transmitted memetically—rather than being learned anew in each generation, which is what happens today.
However, that day has not yet come.
Sure, granted.
I’m with you there, but I’m at a loss as to how you can reconcile this with your earlier post.
Where most of the information that composes a person comes from and what function they “should” optimise seem like rather different topics to me.
A lot of what we acquire from our environment is not information that impacts on what our goals are, but rather is used to build a model of the environment—which we then use to help us pursue our goals.
That’s true, but some of the information does impact what our goals are. We learn “values” from experience, not just “facts”. (I’m putting scare-quotes here because I believe the fact/value dichotomy is often overblown.) This gives the person a place to stand which is neither gene nor meme nor simply a mixture of the two. When we rationally reach reflective equilibrium on our goals, I believe, this will continue to be the case.
A huge amount of the value-related information that we get from our environment comes from other living entities attempting to manipulate us. Sometimes, they negotiate with us, or manipulate our sense data—rather than attempting to affect our values. However, sometimes they attempt to “hijack our brains”—and redirect our values towards their own ends, or those of their makers.
The biggest influences come from other humans, symbionts, pathogens and memes. Basically most goal directedness comes from other living, goal-directed systems—so genes and memes—though not necessarily your own genes and memes.
The next biggest source of human values comes from the theory of self-organising systems. The brain is probably the most important self-organising system involved. It mostly has desires that arise by virtue of it being a large reinforcement learning system. Essentially, the brain sometimes acts as though it wants its own reward signals—and it fulfills those desires by doing things like taking rewarding drugs. The brain was made by genes—but wireheading is not exactly what the genes want.
The next-most significant effect on human values is probably mistakes (e.g. sub-optimal adaptations).
Many humans delight in seeking out noble sources of value—probably for signalling reasons. They do not like hearing that genes and memes are primarily responsible for what they hold most dear—and the next biggest influences are probably wireheading and mistakes.
That’s the sort of thing I had in mind. Because our conceptual framework is learned from experience, what we learn to seek is not necessarily what our genes “want”. Of course if you place a human being in “the ancestral environment” then you will get learned values that serve the “aim of the genes” reasonably well—but not perfectly. In the modern environment, less so. The brain sometimes wants its own reward signals per se, and more often wants certain distal events that have been favored over the learning process.
Having thus discovered certain activities to be meaningful and rewarding, people go on to tell each other about them. This strongly shapes the meme environment.
How noble or ignoble this is, may be in the eyes of the beholder. It doesn’t look so ignoble to me.
The idea of values coming from genes does not say anything about whether those desires are adaptive in the modern environment. Humans desire fat and sugar. Those desires are built in—coded in genes. That they are currently probably maladaptive is a different issue.
Saying that we have desires for chocolate gateau and ice cream that we must have learned from our environment seems like a “less helpful” way of looking at it the situation to me. It is better to regard chocolate gateau and ice cream as being learned associations with things actually valued. If they are to be classified as being “learned values”, they are learned instrumental values.
That’s a half-truth, or maybe a truth-value-less sentence. One could just as easily say humans desire calories and vitamin C. Fat and sugar just happen to be, in the ancestral environment, means to these ends. Or perhaps humans simply desire survival and reproduction. I’m doubtful that any of these interpretations can claim to be the true one, at least until an individual human endorses one.
“Actually valued” suggests that ice cream is not actually valued except as a means to fat and sugar, which is definitely not true. Just try taking away someone’s ice cream and offering lard and sugar in their stead.
That’s a half-truth, or maybe a truth-value-less sentence. One could just as easily say humans desire calories and vitamin C.
Calories, yes, vitamin C—probably not. It took quite a while for the link between vitamin C deficiency and the foods containing it to be discovered. Humans apparently don’t have an instinctive craving for it—perhaps because their diet is normally saturated with it.
Sure—e.g. the maternal instinct.
So: those are not really different interpretations of the same facts, but statements covering several different desires—so we don’t have to choose between them.
It was not an intended implication that fat and suger represent all the human gustatory desires.
We don’t have to choose between statements of which desires are “coded in genes”, but if we affirm too many of them we’ll have more assumptions than are needed to explain the data. Why not just say that a purpose of the genes is to bring it about that in an appropriate environment the organism will consume adequate calories—rather than saying that the genes program a desire for fat? “Desire” is a psychological description first and foremost, and only incidentally, if at all, a term of evolutionary biology.
Do organisms desire fat or calories? They mostly like the associated taste sensations and associated satiety. As I understand it, there are separate taste receptors for fat and sugar—so it is probably better to say that humans desire some types of fat and sugar than to say that they desire calories.
There’s little difference—since the way the genes bring about the consumption is via desires. FWIW, I didn’t just say “fat”, I said “fat and sugar”—and they were examples of desires—not an exhaustive list.
Genes build our desires, though—in much the same way that they build our hearts and legs.
And by the same token, it is probably even better to say that they desire ice cream and/or the taste of ice cream, and so on for other particular foods. The brain integrates information from the receptors you mentioned together with other taste receptors, smell receptors, texture sensations, and so on. Percepts and concepts are formed from the integrated total, and these frame the language of desire. Probably some of the best chefs and food critics do directly perceive, and savor, fat and sugar contents as such, but I doubt whether the same applies to all of us. Most of us are too distracted by the rich complex gestalt experience. This isn’t to deny, of course, that our desires are strongly influenced by fat content.
It seems to me that you are not allowing enough slippage between two levels of explanation: what the genes want, and what the organisms want. Genes built our desires, but their “purposes” in doing so are not identical to those desires. Whereas, in the context of our conversation here, it would not be too wrong to say that humans’ purposes are our desires.
By the way, I apologize if it sounded like I’m trying to oversimplify your position. In a (failed) economy of words, I figured it was OK to focus on one of the examples, namely a desire for fat.
So: my position is that it is fine to talk like that—provided one makes the distinction between proximate and ultimate values. There’s a pretty neat and general way of abstracting learning systems out into agent, ultimate values and environment using the framework of reinforcement learning. Under that abstraction, “the taste of ice cream” is not one of the ultimate values. Those values might include diversity, contrast and texture as well as fat and sugar—but I don’t think there’s much of a case for putting “the taste of ice cream” in there.
I think I already acknowledged that distinction—with my example of “taking rewarding drugs” being something that the brain wants, but the genes do not.
Maybe—depending on which parts of yourself you most identify with.
Interesting. I’d appreciate references or links. To me, the interesting and still open question is how these “ultimate” values relate to the outcome of rational reflection and experimentation by the individual.
I just mean the cybernetic agent-environment framework with a reward/utility signal. For example, see page 1 of Hibbard’s recent paper, page 5 of Universal Algorithmic Intelligence: A Mathematical Top-Down Approach, or page 39 of Machine Super Intelligence.
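For what it’s worth, that agent-environment framework can be sketched in a few lines. This is just an illustrative toy, not anything from the papers cited above: the function and environment names (`run_episode`, `toy_env`) are made up, and the “ultimate values” live entirely in the scalar reward the environment emits.

```python
# Minimal sketch of the cybernetic agent-environment loop:
# at each step the agent emits an action, the environment returns
# an observation and a scalar reward, and the agent's "ultimate
# values" are encoded entirely in that reward signal.

def run_episode(agent_policy, env_step, steps=10):
    """Run one agent-environment interaction loop; return total reward."""
    observation, total_reward = None, 0.0
    for _ in range(steps):
        action = agent_policy(observation)           # agent acts
        observation, reward = env_step(action)       # environment responds
        total_reward += reward                       # reward accumulates
    return total_reward

# Hypothetical toy environment: reward 1.0 for action "a", else 0.0.
def toy_env(action):
    return ("obs", 1.0 if action == "a" else 0.0)

def always_a(_observation):
    return "a"

print(run_episode(always_a, toy_env))  # 10.0
```

Under this abstraction, a value like “the taste of ice cream” would be a feature of the observation stream, not a term in the reward function itself.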
So: changes to ultimate values can potentially happen when there are various kinds of malfunction. Memetic hijacking illustrates one way in which it can happen. Nature normally attempts to build systems which are robust and resistant to this kind of change—but such changes can happen.
Maybe existing victims of memetic hijacking could use “reflection and experimentation” to help them to sort their heads out and recover from the attack on their values.
Thanks for the links. Both the AIXI paper and Machine Super Intelligence use cardinal utilities, or in the latter case rational-number approximations to cardinal utilities (not sure if economists have a separate label for that), for their reward functions. I suspect this limits their applicability to humans and other organisms.
In some cases. But the whole concept of “rationality” can probably usefully be viewed as a memeplex. And rational reflection leading to its rejection, while not a priori impossible, seems unlikely.
The good news from a gene’s point of view—in case anyone still cares about that—is that our genes probably co-evolved with rationality memes for a significant time period. Lately, though, the rate of evolution of the memes may be leaving the genes in the dust. That is, their time constants of adaptation to environmental change differ dramatically.
FWIW, I don’t see that as much of a problem. I’m more concerned about humans having a multitude of pain sensors (multiple reward channels), and a big mountain of a-priori knowledge about which actions are associated with which types of pain—though that doesn’t exactly break the utility-based models either.
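To illustrate why multiple reward channels don’t break the utility-based models: the standard move is to collapse the channels into the single scalar those models expect, e.g. with a weighted sum. The channel names and weights below are purely hypothetical:

```python
# Sketch (illustrative, not from the cited papers): folding multiple
# reward channels into one scalar utility via a weighted sum.

def scalarize(channels, weights):
    """Collapse a dict of per-channel rewards into one scalar utility."""
    return sum(weights[name] * value for name, value in channels.items())

# Hypothetical channels: negative values for pain, positive for pleasure.
channels = {"heat": -2.0, "pressure": -0.5, "taste": 1.0}
weights = {"heat": 1.0, "pressure": 0.5, "taste": 2.0}

print(scalarize(channels, weights))  # -0.25
```

Of course, whether the brain actually performs anything like a fixed linear scalarization is exactly the kind of open question at issue here.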
Sure, but “rationality” and “values” are pretty orthogonal ideas. You can use rational thinking to pursue practically any set of values. I suppose if your values are crazy ones, a dose of rationality might have an effect.
Yes indeed. That’s been going on since the stone age, and it has left its mark on human nature.
Pretty much, but I think not totally. But we’ve gone far enough afield already. I’ll note this as a possible topic for a future discussion post.
A huge amount of the value-related information that we get from our environment comes from other living entities attempting to manipulate us—and from memes attempting to manipulate us. Sometimes, they negotiate with us, or manipulate our sense data—rather than attempting to affect our values. However, sometimes they attempt to “hijack our brains”—and redirect our values towards their own ends, or those of their makers.
Basically, most goal-directedness comes from living, goal-directed systems—so genes and memes—though not necessarily your genes and memes: also those of associates and pathogens. There are some simple non-living goal-directed systems out there—but none of them have access to technology that allows them to influence our values.
If you think there are other important sources of human values—well, it isn’t terribly clear why you would think that.
Many humans delight in seeking out noble sources of value, for signalling reasons. They can’t stand to hear that genes and memes are primarily responsible for what they hold most dear—even though that’s the actual situation. This seems to be one source of “memetics resistance”—people just can’t bear to hear this story about their own values.
Alas, the next-most significant effect on human values is probably mistakes (e.g. sub-optimal adaptations). I note that these do not represent particularly noble influences either.
They are two important parts but there is a whole heap of important information stored in the brain that isn’t ‘memes’. Sentiments, desires, weightings, skills, habits, aversions. They just don’t fit in under ‘memes’ - I mean whole parts of the brain don’t even store memes at all.