It seems beneficial to make sure my understanding of why Pearce’s argument fails matches that of others, even if I don’t need to convince you that it fails.
the imperative against suffering applies to people and animals whose welfare is not in any way beneficial and sometimes even detrimental to those exhibiting compassion.
I interpret imperatives as “you should X,” where the operative word is the “should,” even if the content is the “X.” It is not at all obvious to me why Pearce expects the “should” to be convincing to a paperclipper. That is, I don’t think there is a logical argument from arbitrary premises to adopting a preference for not harming beings that can feel pain, even though the paperclipper, on the way to accomplishing its goals, may imagine a large number of unconvincing logical arguments whose conclusion is “don’t harm beings that can feel pain if it is costless to avoid.”
Perhaps it’s worth distinguishing the Convergence vs Orthogonality theses for:
1) biological minds with a pain-pleasure (dis)value axis.
2) hypothetical paperclippers.
Unless we believe that the expanding circle of compassion is likely to contract, IMO a strong case can be made that rational agents will tend to phase out the biology of suffering in their forward light-cone. I’m assuming, controversially, that superintelligent biological posthumans will not be prey to the egocentric illusion that was fitness-enhancing on the African savannah. Hence the scientific view-from-nowhere, i.e. no arbitrarily privileged reference frames.
But what about 2? I confess I still struggle with the notion of a superintelligent paperclipper. But if we grant that such a prospect is feasible and even probable, then I agree the Orthogonality thesis is most likely true.
Unless we believe that the expanding circle of compassion is likely to contract, IMO a strong case can be made that rational agents will tend to phase out the biology of suffering in their forward light-cone.
This reads to me as “unless we believe conclusion ~X, a strong case can be made for X,” which makes me suspect that I made a parse error.
that superintelligent biological posthumans will not be prey to the egocentric illusion that was fitness-enhancing on the African savannah
This is a negative statement: “synthetic superintelligences will not have property A, because they did not come from the savannah.” I don’t think negative statements are as convincing as positive statements: “synthetic superintelligences will have property ~A, because ~A will be rewarded in the future more than A.”
I suspect that a moral “view from here” will be better at accumulating resources than a moral “view from nowhere,” both now and in the future, for reasons I can elaborate on if they aren’t obvious.
There is no guarantee that greater perspective-taking capacity will be matched with equivalent action. But presumably greater empathetic concern makes such action more likely. [cf. Steven Pinker’s “The Better Angels of Our Nature”. Pinker aptly chronicles e.g. the growth in consideration of the interests of nonhuman animals; but this greater concern hasn’t (yet) led to an end to the growth of factory-farming. In practice, I suspect in vitro meat will be the game-changer.]
The attributes of superintelligence? Well, the growth of scientific knowledge has been paralleled by a growth in awareness—and partial correction—of all sorts of cognitive biases that were fitness-enhancing in the ancestral environment of adaptedness. Extrapolating, I was assuming that full-spectrum superintelligences would be capable of accessing and impartially weighing all possible first-person perspectives and acting accordingly. But I’m making a lot of contestable assumptions here. And see too the perils of: http://en.wikipedia.org/wiki/Apophatic_theology
As mentioned elsewhere in this thread, it’s not obvious that the circle is actually expanding right now.