Does your utility function treat “a life saved by Perplexed” differently from just “a life”? I could understand an egoist who does not terminally value other lives at all (as opposed to instrumentally valuing saving lives as a way to obtain positive emotions or other benefits for oneself), but a utility function that treats “a life saved by me” differently from just “a life” seems counterintuitive. If the utility of a life saved by Perplexed is no different from the utility of another life, then unless your utility function just happens to have a sharp bend at the current world population level, the utility of two saved lives can’t be much less than twice the utility of one saved life. (See Eliezer’s version of this argument, and more along this vein, here.)
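To spell out the arithmetic behind that last claim (a sketch of the reasoning, with the smoothness assumption made explicit): if $U(n)$ is a smooth function of the total number of lives $n$, then its slope is nearly constant over a change of one or two lives at any large $n$, so

$$U(n+2) - U(n) \approx 2\,\bigl[U(n+1) - U(n)\bigr],$$

i.e., two saved lives must be worth nearly twice one saved life unless $U$ has a kink right at the current population level.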
Does your utility function treat “a life saved by Perplexed” differently from just “a life”?
I’m torn between responding with “Good question!” versus “What difference does it make?”. Since I can’t decide, I’ll make both responses.
Good question! You are correct in surmising that the root justification for much of the value that I attach to other lives is essentially instrumental (via channels of reciprocity). But not all of the justification. Evolution has instilled in me the instinct of valuing the welfare (fitness) of kin at a significant fraction of the value of my own personal welfare. And then there are cases where kinship and reciprocity become connected in serial chains. So the answer is that I discount based on ‘remoteness’, where remoteness is a distance metric reflecting inverse connectedness, both genetic and social-interactive.
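A minimal sketch of what such remoteness-discounting could look like; the functional forms, weights, and numbers are illustrative assumptions of mine, not anything Perplexed specifies:

```python
import math

def remoteness(genetic_distance, social_distance, a=1.0, b=1.0):
    """Combined 'remoteness': larger for people less connected to me,
    genetically or socially. The linear combination is an assumption."""
    return a * genetic_distance + b * social_distance

def discounted_value(base_value, genetic_distance, social_distance):
    """Value attached to another person's welfare, discounted
    exponentially in remoteness (the exponential form is hypothetical)."""
    return base_value * math.exp(-remoteness(genetic_distance, social_distance))

# A close relative and frequent cooperator counts for far more than a
# genetically and socially distant stranger (illustrative numbers):
print(discounted_value(1.0, 0.5, 0.1))  # ~0.55
print(discounted_value(1.0, 5.0, 5.0))  # ~0.00005
```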
What difference does it make? This is my utility function we are talking about, and it is operational only in deciding my own actions. So, even if my utility function attached huge value to lives saved by other people, it is not clear how this would change my behavior. The question seems to be whether people ought to have multiple utility functions—one for directing their own rational choices, and others for some other purpose.
I am currently reading Binmore’s two-volume opus Game Theory and the Social Contract. I strongly recommend it to everyone here who is interested in decision theory and ethics. Although Binmore doesn’t put it in these terms, his system does involve two different sets of values, which are used in two different ways. One is the set of values used in the Game of Life—a set of values which may be as egoistic as the agent wishes (or as altruistic). However, although the agent is conceptually free in the Game of Life, as a practical matter, he is coerced by everyone else to adhere to a Social Contract. Due to this coercion, he mostly behaves morally.
But how does the Social Contract arise? In Binmore’s normative fiction, it arises by negotiated consensus of all agents. The negotiation takes place in a Rawlsian Original Position under a Veil of Ignorance. Since the agent-while-negotiating has different self-knowledge than does the agent-while-living, he manifests different values in the two situations—particularly with regard to utilities which accrue indexically. So, according to Binmore, even an agent who is inherently egoistic in the Game of Life will be egalitarian in the Game of Morals where the Social Contract is negotiated. Different values for a different purpose.
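A toy illustration of that last point (my own construction, not Binmore’s formalism): in the Game of Life the egoist knows which position he occupies, while behind the veil he does not, so the same preferences yield different choices.

```python
# Two hypothetical "social contracts", each assigning payoffs to two
# positions (say, rich and poor). All numbers are illustrative.
contracts = {
    "unequal": [10.0, 1.0],
    "equal":   [5.0, 5.0],
}

# In the Game of Life, an egoist who knows he occupies position 0
# simply maximizes his own indexical payoff:
in_life = max(contracts, key=lambda c: contracts[c][0])

# In the Game of Morals, he does not know which position he will
# occupy; a maximin (Rawlsian) chooser ranks contracts by their
# worst position:
behind_veil = max(contracts, key=lambda c: min(contracts[c]))

print(in_life)      # unequal
print(behind_veil)  # equal
```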
That is the concise summary of the ethical system that Binmore is constructing in the two volumes. But he does a marvelously thorough job of ground-clearing—addressing mistakes made by Kant, Rawls, Nozick, Parfit, and others regarding the Prisoner’s Dilemma, Newcomb’s ‘paradox’, whether it is rational to vote (your vote is probably wasted), etc. And in the course of doing so, he pretty thoroughly demolishes what I understand to be the orthodox position on these topics here at Less Wrong. Really, really recommended.
Thanks for pointing me to Binmore’s work. It does sound very interesting.
Evolution has instilled in me the instinct of valuing the welfare (fitness) of kin at a significant fraction of the value of my own personal welfare.
This is tangential to your point, but what would you say to a utilitarian who says:
“Evolution (memetic evolution, that is) has instilled in me the idea that I should linearly value the welfare of others regardless of kinship, regardless of what instincts I got from my genes.”
And in the course of doing so, he pretty thoroughly demolishes what I understand to be the orthodox position on these topics here at Less Wrong.
By “orthodox position” are you referring to TDT-related ideas? I’ve made the point several times that I doubt they apply to humans. (I don’t vote myself, actually.) I don’t see how Binmore could have “demolished” those ideas as they relate to AIs since he couldn’t have learned about them when he wrote his books.
what would you say to a utilitarian who says: “Evolution (memetic evolution, that is) has instilled in me the idea that I should linearly value the welfare of others regardless of kinship, regardless of what instincts I got from my genes.”
There are two separate issues here. I assume that by “linearly” you are referring to the subject that started this conversation: my claim that utilities “are not additive”, an idea also expressed as “diminishing returns”, or diminishing marginal utility of additional people. I probably would not dispute the memetic evolution claim if it focused on “linearity”.
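For concreteness, one illustrative functional form (not one Perplexed commits to) with diminishing marginal utility of saved lives:

$$U(n) = \log(1+n), \qquad U(2) \approx 1.10 < 2\,U(1) \approx 1.39,$$

so each additional life adds less utility than the one before; that is all “not additive” needs to mean.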
The second issue is a kind of universality—all people valued equally regardless of kinship or close connectedness in a network of reciprocity. I would probably express skepticism at this claim. I would probe the claim to determine whether the selection operates at the level of the meme, the individual, or the society. And then I would ask how that meme contributes to its own propagation at that level.
By “orthodox position” are you referring to TDT-related ideas?
Mostly, I am referring to views expressed by EY in the sequences and frequently echoed by LW regulars in comments. Some of those ideas were apparently repeated in the TDT writeup (though I may be wrong about that—the write-up was pretty incoherent.)
I would probe the claim to determine whether the selection operates at the level of the meme, the individual, or the society.
I’m guessing mostly at the meme level.
And then I would ask how that meme contributes to its own propagation at that level.
It seems pretty obvious, doesn’t it? Utilitarianism makes a carrier believe that they should act to maximize social welfare and that more people believing utilitarianism would help toward that goal, so carriers think they should try to propagate the meme. Also, many egoists may believe that utilitarians would be more willing to contribute to the production of public goods, which they can free ride upon, so they would tend to not argue publicly against utilitarianism, which further contributes to its propagation.
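A toy public-goods game makes the free-riding incentive concrete (the multiplier and stakes are illustrative choices of mine):

```python
def payoffs(contributions, multiplier=1.6):
    """Linear public-goods game: the pot is multiplied and split
    equally among all players, contributors and free-riders alike."""
    share = sum(contributions) * multiplier / len(contributions)
    return [share - c for c in contributions]

# Three utilitarians contribute 10 each; one egoist free-rides and
# walks away with the largest payoff:
print(payoffs([10, 10, 10, 0]))  # [2.0, 2.0, 2.0, 12.0]
```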
Your just-so story is more complicated than you seem to think. It involves an equilibrium of at least two memes: an evangelical utilitarianism which damages the host but propagates the meme, plus a cryptic egoism which presumably benefits the host but can’t successfully propagate (it repeatedly arises by spontaneous generation, presumably).
I could critique your story on grounds of plausibility (which strategy do crypto-egoists suggest to their own children?) but instead I will ask why someone infected by the evangelical utilitarianism meme would argue as you suggested in the great-grandparent:
“Evolution (memetic evolution, that is) has instilled in me the idea that I should linearly value the welfare of others regardless of kinship, regardless of what instincts I got from my genes.”
Isn’t it more likely that someone realizing that they have been subverted by a selfish meme would be trying to self-modify?
Isn’t it more likely that someone realizing that they have been subverted by a selfish meme would be trying to self-modify?
What does “subverted” mean in this context? For example, I devote a lot of resources to thinking about philosophical problems, which does not seem to contribute to my genetic fitness. Have I been “subverted” by a selfish meme (i.e., the one that says “the unexamined life is not worth living”)? If so, I don’t feel any urge to try to self-modify away from this. Couldn’t a utilitarian feel the same?
I devote a lot of resources to thinking about philosophical problems, which does not seem to contribute to my genetic fitness. Have I been “subverted” by a selfish meme (i.e., the one that says “the unexamined life is not worth living”)?
Possibly. It depends on why you do that. The other main hypotheses are that your genetic program may just be malfunctioning in an unfamiliar environment, or that the philosophical problems do—in fact—have some chance of turning out to be adaptive.
If so, I don’t feel any urge to try to self-modify away from this.
Right. So: that could be the result of a strategy the meme uses to evade your memetic immune system—or the result of reduced memetic immunity caused by attacks from other memes you have previously been exposed to.
Any meme that makes a human more meme-friendly benefits itself—as well as all the other memes in the ideosphere. Consequently, it tends to become popular—since every other meme wants to be linked to it.
A utilitarian might well be indifferent to the self-serving nature of the meme. But, as I recall, you brought up the question in response to my suggestion that my own (genetic) instincts derive a kind of nobility from their origin in the biological process of natural selection for organism fitness. Would our hypothetical utilitarian be proud of the origin of his meme in the cultural process of selection for meme self-promotion?
I don’t think you mentioned “nobility” before. What you wrote was just:
Evolution has instilled in me the instinct of valuing the welfare (fitness) of kin at a significant fraction of the value of my own personal welfare.
which seemed to me to be a kind of claim that a utilitarian could make with equal credibility. If you’re now saying that you feel noble and proud that your values come from biological instead of cultural evolution… well I’ve never seen that expressed anywhere else before, so I’m going to guess that most people do not have that kind of feeling.
...seemed to me to be a kind of claim that a utilitarian could make with equal credibility.
Well, he could credibly make that claim if he could credibly assert that the ancestral environment was remarkably favorable for group selection.
… you’re now saying that you feel noble and proud that your values come from biological instead of cultural evolution...
What I actually said was “my own (genetic) instincts derive a kind of nobility from their origin …”. The value itself claims a noble genealogy, not a noble essence. If I am proud on its behalf, it is because that instinct has been helping to keep my ancestral line alive for many generations. I could say something similar for a meme which became common by way of selection at the individual or societal level. But what do I say about a selfish meme? That I am not the only person it fooled and exploited? I’m going to guess that most people do have that kind of feeling.
Not group, surely: kin. He quoted you as saying: “welfare (fitness) of kin”.
I think you misinterpreted the context. I endorsed kin selection, together with discounting the welfare of non-kin. Someone (not me!) wishing to be a straight utilitarian and wishing to treat kin and non-kin equally needs to endorse group selection in order to give their ethical intuitions a basis in evolutionary psychology, because it is clear that humans engage in kin recognition, so kin selection cannot ground equal treatment of non-kin.
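For reference, the standard formalization of kin selection is Hamilton’s rule: an allele for altruism spreads when

$$r\,b > c,$$

where $r$ is the genetic relatedness between actor and recipient, $b$ is the fitness benefit to the recipient, and $c$ is the fitness cost to the actor. Since $r \approx 0$ for non-kin, this is precisely a discount on the welfare of non-kin, not a basis for valuing everyone equally.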
Now I see how you are reading the “kind of claim that a utilitarian could make” bit.
As you previously observed, the actual answer to this involves cultural evolution—not group selection.
The “evolutionary psychology” explanation is that humans developed sophisticated culture which was—on average—beneficial, but which allowed all kinds of deleterious memes in with the beneficial ones.
A utilitarian could claim:
Evolution has produced in me the tendency to value the welfare of non-kin at a significant fraction of the value of my own personal welfare.
...on the grounds that their evolution involved gene-meme coevolution—and that inevitably involves a certain amount of memetic hijacking by deleterious memes—such as utilitarianism.
Isn’t it more likely that someone realizing that they have been subverted by a selfish meme would be trying to self-modify?
I struggle to understand what is going on there as well. I think some of these folk have simultaneously embraced a kind of “genes=bad, memes=good” memeplex. This says something like: nature red in tooth and claw is evil, while memes turn brutish cavemen into civilized humans. The memes are the future, and they are good. That is a meme other memes want to associate with. Obviously, if you buy into such an idea, then that promotes the interests of all of your memes, often at the expense of your genes.
The longer we argue, and the more we ponder, the more we empower the memes. I don’t have a problem with that.
Utilitarianism makes a carrier believe that they should act to maximize social welfare and that more people believing utilitarianism would help toward that goal, so carriers think they should try to propagate the meme. Also, many egoists may believe that utilitarians would be more willing to contribute to the production of public goods, which they can free ride upon, so they would tend to not argue publicly against utilitarianism, which further contributes to its propagation.
My hypothesis seems a teensy bit different:
Utilitarianism is a means of signalling what an unselfish goody-two-shoes you are—and many like to send that signal, even if they don’t walk the walk. Utilitarianism seems to have hooked some moral philosophers—whose job description required them to send that message.
Also, utilitarianism is a tool used by charities and causes to manipulate people into giving away their worldly goods. So: there are some financial forces that lead to its marketing and promotion.
I am sceptical about your story about egoists regarding utilitarians positively. Give me an egoist any day. The utilitarian is probably lying to others and to themselves, battling their nature and experiencing inner conflict. Their brain has been hijacked by a sterilising meme. At any moment, I don’t know if their utilitarian side will be dominant, or whether their natural programming will be. That makes it hard for me to model and deal with them.
You can always trust a dishonest man, said the famous philosopher. But you couldn’t trust him, after all; he wasn’t as dishonest as he claimed.
This is tangential to your point, but what would you say to a utilitarian who says:
“Evolution (memetic evolution, that is) has instilled in me the idea that I should linearly value the welfare of others regardless of kinship, regardless of what instincts I got from my genes.”
Such a belief is potentially disastrous for the genes, so how come it got through the memetic immune system? Perhaps this is a case of meme evolution outstripping gene evolution, resulting in virulent memes that can memetically hijack people’s brains. However, many people seem to have working immune systems—and can resist this meme. Do the utilitarians have weakened memetic immunity? What can have led to that? Were they not taught about the risks of memetic hijacking in school—or by their families?
Does your utility function treat “a life saved by Perplexed” differently from just “a life”? I could understand an egoist who does not terminally value other lives at all (as opposed to instrumentally valuing saving lives as a way to obtain positive emotions or other benefits for oneself), but a utility function that treats “a life saved by me” differently from just “a life” seems counterintuitive.
Surely we expect natural selection to build organisms that value the lives of their relatives. If you save a life, it is surely more likely to be that of a relative than a randomly-selected life—so organisms that value “local” lives seem only natural to me.