If I recall correctly, the logic was that in the process of searching the space of optimization options it would necessarily encounter an imperative against suffering, or something to that effect, inevitably leading it to modify its goal system to be more compassionate, the way humanity seems to be evolving.
I see no reason to suspect the space of optimization options contains value imperatives, assuming the AI is guarded against the equivalent of SQL injection attacks.
Humanity seems to be evolving towards compassion because the causal factors increasing compassion are on average profitable for individual humans with those factors. The easy example of this is stable, strong police forces routinely hanging murderers, instead of those murderers profiting from their actions. If you don’t have an analogue of the police, then you shouldn’t expect the analogue of the reduction in murders.
(I should remark that I very much like the way this report is focused; I think that trying to discuss causal models explicitly is much better than trying to make surface-level analogies.)
empty space for a meditation seems out of place in a more-or-less formal paper
At the very least, using a page break rather than a bunch of ellipses seems better.
Humanity seems to be evolving towards compassion because the causal factors increasing compassion are on average profitable for individual humans with those factors.
I was simply paraphrasing David Pearce, it’s not my opinion, so no point arguing with me. That said, your argument seems misdirected in another way: the imperative against suffering applies to people and animals whose welfare is not in any way beneficial and sometimes even detrimental to those exhibiting compassion.
Yeah, but they are losing compassion for other things (unborn babies, gods, etc...). What reason is there to believe there is a net gain in compassion, rather than simply a shift in the things to be compassionate towards?
EDIT: This should have been directed towards Vaniver rather than shminux.
an expanding circle of empathetic concern needn’t reflect a net gain in compassion. Naively, one might imagine that e.g. vegans are more compassionate than vegetarians. But I know of no evidence this is the case. Tellingly, female vegetarians outnumber male vegetarians by around 2:1, but the ratio of male to female vegans is roughly equal. So an expanding circle may reflect our reduced tolerance of inconsistency / cognitive dissonance. Men are more likely to be utilitarian hyper-systematisers.
Does your source distinguish between motivations for vegetarianism? It’s plausible that the male:female vegetarianism rates are instead motivated by (e.g.) culture-linked diet concerns—women adopt restricted diets of all types significantly more than men—and that ethically motivated vegetarianism occurs at similar rates, or that self-justifying ethics tend to evolve after the fact.
Nornagest, fair point. See too “The Brain Functional Networks Associated to Human and Animal Suffering Differ among Omnivores, Vegetarians and Vegans”: http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0010847
an expanding circle of empathetic concern needn’t reflect a net gain in compassion. Naively, one might imagine that e.g. vegans are more compassionate than vegetarians. But I know of no evidence this is the case. Tellingly, female vegetarians outnumber male vegetarians by around 2:1, but the ratio of male to female vegans is roughly equal. So an expanding circle may reflect our reduced tolerance of inconsistency / cognitive dissonance. Men are more likely to be utilitarian hyper-systematisers.
Right. What I should have said was:
What reason is there to believe that people are compassionate towards more types of things, rather than merely different types of things?
The growth of science has led to a decline in animism. So in one sense, our sphere of concern has narrowed. But within the sphere of sentience, I think Singer and Pinker are broadly correct. Also, utopian technology makes even the weakest forms of benevolence vastly more effective. Consider, say, vaccination. Even if, pessimistically, one doesn’t foresee any net growth in empathetic concern, technology increasingly makes the costs of benevolence trivial.
[Once again, I’m not addressing here the prospect of hypothetical paperclippers—just mind-reading humans with a pain-pleasure (dis)value axis.]
Would this be the same Singer who argues that there’s nothing wrong with infanticide?
On (indirect) utilitarian grounds, we may make a strong case that enshrining the sanctity of life in law will lead to better consequences than legalising infanticide. So I disagree with Singer here. But I’m not sure Singer’s willingness to defend infanticide as (sometimes) the lesser evil is a counterexample to the broad sweep of the generalisation of the expanding circle. We’re not talking about some Iron Law of Moral Progress.
But I’m not sure Singer’s willingness to defend infanticide as (sometimes) the lesser evil
If I recall correctly, Singer’s defense is that it’s better to kill infants than have them grow up with disabilities. The logic here relies on excluding infants and, to a certain extent, people with disabilities from our circle of compassion.
is a counterexample to the broad sweep of the generalisation of the expanding circle. We’re not talking about some Iron Law of Moral Progress.
You may want to look at gwern’s essay on the subject. By the time you finish taking into account all the counterexamples your generalization looks more like a case of cherry-picking examples.
Eugine, are you doing Peter Singer justice? What motivates Singer’s position isn’t a range of empathetic concern that’s stunted in comparison to people who favour the universal sanctity of human life. Rather it’s a different conception of the threshold below which a life is not worth living. We find similar debates over the so-called “Logic of the Larder” for factory-farmed non-human animals: http://www.animal-rights-library.com/texts-c/salt02.htm. Actually, one may agree with Singer—both his utilitarian ethics and bleak diagnosis of some human and nonhuman lives—and still argue against his policy prescriptions on indirect utilitarian grounds. But this would take us far afield.
What motivates Singer’s position isn’t a range of empathetic concern that’s stunted in comparison to people who favour the universal sanctity of human life. Rather it’s a different conception of the threshold below which a life is not worth living.
By this logic most of the people from the past who Singer and Pinker cite as examples of less empathic individuals aren’t less empathic either. But seriously, has Singer made any effort to take into account, or even look at, the preferences of any of the people who he claims have lives that aren’t worth living?
I disagree with Peter Singer here. So I’m not best placed to argue his position. But Singer is acutely sensitive to the potential risks of any notion of lives not worth living. Recall Singer lost three of his grandparents in the Holocaust. Let’s just say it’s not obvious that an incurable victim of, say, infantile Tay–Sachs disease, who is going to die around four years old after a chronic pain-ridden existence, is better off alive. We can’t ask the victim: the nature of the disorder means s/he is not cognitively competent to understand the question.
Either way, the case for the expanding circle doesn’t depend on an alleged growth in empathy per se. If, as I think quite likely, we eventually enlarge our sphere of concern to the well-being of all sentience, this outcome may owe as much to the trait of high-AQ hyper-systematising as any widening or deepening compassion. By way of example, consider the work of Bill Gates in cost-effective investments in global health (vaccinations etc) and indeed in: http://www.thegatesnotes.com/Features/Future-of-Food (“the future of meat is vegan”). Not even his greatest admirers would describe Gates as unusually empathetic. But he is unusually rational—and the growth in secular scientific rationalism looks set to continue.
But Singer is acutely sensitive to the potential risks of any notion of lives not worth living.
I’m not sure what you mean by “sensitive”; it certainly doesn’t stop him from being at the cutting edge pushing in that direction.
Either way, the case for the expanding circle doesn’t depend on an alleged growth in empathy per se. If, as I think quite likely, we eventually enlarge our sphere of concern to the well-being of all sentience, this outcome may owe as much to the trait of high-AQ hyper-systematising as any widening or deepening compassion.
By way of example, consider the work of Bill Gates in cost-effective investments in global health (vaccinations etc) and indeed in: http://www.thegatesnotes.com/Features/Future-of-Food (“the future of meat is vegan”). Not even his greatest admirers would describe Gates as unusually empathetic. But he is unusually rational—and the growth in secular scientific rationalism looks set to continue.
You seem to be confusing expanding the circle of beings we care for with being more efficient in providing that caring.
Cruelty-free in vitro meat can potentially replace the flesh of all sentient beings currently used for food.
Yes, it’s more efficient; it also makes high-tech Jainism less of a pipedream.
If I recall correctly, Singer’s defense is that it’s better to kill infants than have them grow up with disabilities. The logic here relies on excluding infants and, to a certain extent, people with disabilities from our circle of compassion.
As I understand the common arguments for legalizing infanticide, it involves weighting the preferences of the parents and society more—not a complete discounting of the infant’s preferences.
As I understand the common arguments for legalizing infanticide, it involves weighting the preferences of the parents and society more—not a complete discounting of the infant’s preferences.
Try replacing “infanticide” (and “infant’s”) in that sentence with “killing Jews” or “enslaving Blacks”. Would you also argue that it’s not excluding Jews or Blacks from the circle of compassion?
It seems like a silly question. Practically everyone discounts the preferences of the very young. They can’t vote, and below some age, are widely agreed to have practically no human rights, and are generally eligible for death on parental whim.
Well the same applies even more strongly to animals, but the people arguing for the “expanding circle of compassion” idea like to cite vegetarianism as an example of this phenomenon.
Well, sure, but adult human females have preferences too, and they are quite significant ones. An “expanding circle of compassion” doesn’t necessarily imply equal weights for everyone.
Well, sure, but adult human females have preferences too, and they are quite significant ones.
So did slave owners.
An “expanding circle of compassion” doesn’t necessarily imply equal weights for everyone.
At the point where A’s inconvenience justifies B’s being killed you’ve effectively generalized the “expanding circle of compassion” idea into meaninglessness.
Well, sure, but adult human females have preferences too, and they are quite significant ones.
So did slave owners.
Sure.
An “expanding circle of compassion” doesn’t necessarily imply equal weights for everyone.
At the point where A’s inconvenience justifies B’s being killed you’ve effectively generalized the “expanding circle of compassion” idea into meaninglessness.
Singer’s obviously right about the “expanding circle”—it’s a real phenomenon. If A is a human and B is a radish, A killing B doesn’t seem too awful. Singer claims newborns are rather like that—in being too young to have much in the way of preferences worthy of respect.
Singer’s obviously right about the “expanding circle”—it’s a real phenomenon.
Um, this is precisely the point of disagreement, and given that your next sentence is about the position that babies have the moral worth of radishes, I don’t see how you can assert that with a straight face.
I didn’t know that. I normally take this for granted.
Some conventional cites on the topic are: Singer and Dawkins.
You just steelmanned Singer’s position into claiming that babies have the moral worth of radishes, and it hasn’t occurred to you that he might not be the best person to cite for arguing for an expanding moral circle?
Sorry, but I have to ask: Are you trolling?
I find it really weird that I don’t recall having seen that piece of rhetoric before. (ETA: Argh, dangerously close to politics here. Retracting this comment.)
I wish I could upvote your retraction.
The closest thing I have seen to this sort of idea is this:
http://www.gwern.net/The%20Narrowing%20Circle
Wow, an excellent essay!
If I remember correctly, I started thinking along these lines after hearing Robert Garland lecture on ancient Egyptian religion. As a side-note to a discussion about how they had little sympathy for the plight of slaves and those in the lower classes of society (since this was all part of the eternal cosmic order and as it should be), he mentioned that they would likely think that we are the cruel ones, since we don’t even bother to feed and clothe the gods, let alone worship them (and the gods, of course, are even more important than mere humans, making our lack of concern all the more horrible).
Any idea where Garland might’ve written that up? All the books listed in your link sound like they’d be on Greece, not Egypt.
It was definitely a lecture, not a book. Maybe I’ll track it down when I get around to Ankifying my Ancient Egypt notes.
It seems beneficial to make sure my understanding of why Pearce’s argument fails matches that of others, even if I don’t need to convince you that it fails.
the imperative against suffering applies to people and animals whose welfare is not in any way beneficial and sometimes even detrimental to those exhibiting compassion.
I interpret imperatives as “you should X,” where the operative word is the “should,” even if the content is the “X.” It is not at all obvious to me why Pearce expects the “should” to be convincing to a paperclipper. That is, I don’t think there is a logical argument from arbitrary premises to adopt a preference for not harming beings that can feel pain, even though the paperclipper may imagine a large number of unconvincing logical arguments whose conclusion is “don’t harm beings that can feel pain if it is costless to avoid” on the way to accomplishing its goals.
Perhaps it’s worth distinguishing the Convergence vs Orthogonality theses for:
1) biological minds with a pain-pleasure (dis)value axis.
2) hypothetical paperclippers.
Unless we believe that the expanding circle of compassion is likely to contract, IMO a strong case can be made that rational agents will tend to phase out the biology of suffering in their forward light-cone. I’m assuming, controversially, that superintelligent biological posthumans will not be prey to the egocentric illusion that was fitness-enhancing on the African savannah. Hence the scientific view-from-nowhere, i.e. no arbitrarily privileged reference frames.
But what about 2? I confess I still struggle with the notion of a superintelligent paperclipper. But if we grant that such a prospect is feasible and even probable, then I agree the Orthogonality thesis is most likely true.
As mentioned elsewhere in this thread, it’s not obvious that the circle is actually expanding right now.
Unless we believe that the expanding circle of compassion is likely to contract, IMO a strong case can be made that rational agents will tend to phase out the biology of suffering in their forward light-cone.
This reads to me as “unless we believe conclusion ~X, a strong case can be made for X,” which makes me suspect that I made a parse error.
that superintelligent biological posthumans will not be prey to the egocentric illusion that was fitness-enhancing on the African savannah
This is a negative statement: “synthetic superintelligences will not have property A, because they did not come from the savanna.” I don’t think negative statements are as convincing as positive statements: “synthetic superintelligences will have property ~A, because ~A will be rewarded in the future more than A.”
I suspect that a moral “view from here” will be better at accumulating resources than a moral “view from nowhere,” both now and in the future, for reasons I can elaborate on if they aren’t obvious.
There is no guarantee that greater perspective-taking capacity will be matched with equivalent action. But presumably greater empathetic concern makes such action more likely. [cf. Steven Pinker’s “The Better Angels of Our Nature”. Pinker aptly chronicles e.g. the growth in consideration of the interests of nonhuman animals; but this greater concern hasn’t (yet) led to an end to the growth of factory-farming. In practice, I suspect in vitro meat will be the game-changer.]
The attributes of superintelligence? Well, the growth of scientific knowledge has been paralleled by a growth in awareness—and partial correction—of all sorts of cognitive biases that were fitness-enhancing in the ancestral environment of adaptedness. Extrapolating, I was assuming that full-spectrum superintelligences would be capable of accessing and impartially weighing all possible first-person perspectives and acting accordingly. But I’m making a lot of contestable assumptions here. And see too the perils of: http://en.wikipedia.org/wiki/Apophatic_theology