Unless we believe that the expanding circle of compassion is likely to contract, IMO a strong case can be made that rational agents will tend to phase out the biology of suffering in their forward light-cone.
This reads to me as “unless we believe conclusion ~X, a strong case can be made for X,” which makes me suspect that I made a parse error.
that superintelligent biological posthumans will not be prey to the egocentric illusion that was fitness-enhancing on the African savannah
This is a negative statement: “synthetic superintelligences will not have property A, because they did not come from the savanna.” I don’t think negative statements are as convincing as positive statements: “synthetic superintelligences will have property ~A, because ~A will be rewarded in the future more than A.”
I suspect that a moral “view from here” will be better at accumulating resources than a moral “view from nowhere,” both now and in the future, for reasons I can elaborate on if they aren’t obvious.
There is no guarantee that greater perspective-taking capacity will be matched by equivalent action. But presumably greater empathetic concern makes such action more likely. [cf. Steven Pinker’s “The Better Angels of Our Nature”. Pinker aptly chronicles, e.g., the growth in consideration of the interests of nonhuman animals; but this greater concern hasn’t (yet) halted the growth of factory farming. In practice, I suspect in vitro meat will be the game-changer.]
The attributes of superintelligence? Well, the growth of scientific knowledge has been paralleled by a growth in awareness—and partial correction—of all sorts of cognitive biases that were fitness-enhancing in the ancestral environment of evolutionary adaptedness. Extrapolating, I was assuming that full-spectrum superintelligences would be capable of accessing and impartially weighing all possible first-person perspectives and acting accordingly. But I’m making a lot of contestable assumptions here. And note, too, the perils of apophatic theology: http://en.wikipedia.org/wiki/Apophatic_theology
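To make that last assumption concrete, here is a minimal toy sketch in Python of the difference between a moral “view from here” and a moral “view from nowhere”. It is my own illustration, not anything proposed above: the outcomes, perspectives, and welfare numbers are all invented. An egocentric chooser ranks outcomes by one privileged first-person perspective; an impartial chooser ranks them by the unweighted sum of welfare across every perspective.

```python
# Toy illustration only: the outcomes ("hoard"/"share"), the perspectives,
# and the welfare numbers below are invented for this sketch.

# welfare[perspective][outcome] = how good that outcome looks from that
# first-person perspective.
welfare = {
    "self":    {"hoard": 10, "share": 4},
    "other_1": {"hoard": -5, "share": 6},
    "other_2": {"hoard": -5, "share": 6},
}

def egocentric_choice(welfare, me):
    """Moral "view from here": rank outcomes by one privileged perspective."""
    return max(welfare[me], key=lambda o: welfare[me][o])

def impartial_choice(welfare):
    """Moral "view from nowhere": rank outcomes by the unweighted sum of
    welfare across all perspectives, privileging none."""
    outcomes = next(iter(welfare.values()))  # any perspective lists all outcomes
    return max(outcomes, key=lambda o: sum(w[o] for w in welfare.values()))

print(egocentric_choice(welfare, "self"))  # -> "hoard"
print(impartial_choice(welfare))           # -> "share"
```

Note that the sketch also bears on the resource-accumulation worry raised above: with these (invented) payoffs the egocentric chooser keeps more for itself, which is exactly the sense in which a “view from here” might out-accumulate a “view from nowhere”.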