I agree with your general elucidation of the CEV principle, but this particular statement stuck out like a red flag:
One is to extract the essence of this, purifying it of variations due to the contingencies of culture, history, …
Our morality and ‘metamorality’ already exist; the CEV has in a sense already been evolving for quite some time, but it is inherently a cultural and memetic evolution that supervenes on our biological brains. So purging it of cultural variations is less than wrong: it is itself cultural.
The flaw, then, is assuming there is a single evolutionary target for humanity’s future, when the more accurate evolutionary trajectory is adaptive radiation. So the C in CEV is unrealistic. Instead of a single coherent future, we will have countless futures, corresponding to the different universes humans will want to create and inhabit after uploading.
There will be convergent cultural effects (trends we see now), but there will also be powerful divergent effects imposed by the speed of light once posthuman minds start thinking thousands or millions of times faster. This is a constraint of physics with interesting implications; more on this toward the end of this post.
If one single religion and culture had taken over the world, a universal CEV might have a stronger footing. The dominant religious branch of the West came close, but not quite.
It’s more than just a theory of right action appropriate to human beings; it’s also a question of what to do with all the matter, how to divide resources, what political and economic structures to adopt, and so on.
Given the success of Christianity and related worldviews, we can guess at some features of the CEV: people generally will want immortality in virtual-reality paradises, and they are quite willing (even happy) to trust an intelligence far beyond their own to run the show, though they have a particular interest in seeing it take a human face. Also, even though willing to delegate ultimate authority, they will want to take an active role in helping shape universes.
The other day I was flipping through channels and happened upon a late-night Christian preacher, and he was talking about the New Jerusalem and all that. There was one bit I found amusing: he said humans would join God’s task force and help shape the universe, and would be able to zip from star system to star system without anything as slow or messy as a rocket.
I found this amusing because, in a way, it’s accurate: physical space travel will be far too slow for beings that think a million times faster and have molecular-level computers for virtual-reality simulation.
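To make the "too slow" claim concrete, here is a minimal back-of-the-envelope sketch. It assumes (as the post does) a mind running a million times faster than a biological human, and asks how long a one-way light-speed signal feels to that mind over a few illustrative distances. The distances and the speedup factor are my illustrative assumptions, not figures from the thread.

```python
# Sketch: subjective light-delay for a mind running at a speedup factor k.
# All figures are illustrative assumptions, not claims from the discussion.

C = 299_792_458.0  # speed of light, m/s


def subjective_delay_seconds(distance_m: float, speedup: float) -> float:
    """One-way light delay, as experienced by a mind sped up by `speedup`."""
    return (distance_m / C) * speedup


SPEEDUP = 1e6  # "a million times accelerated"

LIGHT_YEAR = 9.4607e15  # metres

distances = [
    ("Earth to Moon", 3.844e8),
    ("Earth to Mars (closest approach)", 5.46e10),
    ("Nearest star system (~4.24 ly)", 4.24 * LIGHT_YEAR),
]

for name, d in distances:
    days = subjective_delay_seconds(d, SPEEDUP) / 86_400
    print(f"{name}: {days:,.1f} subjective days one-way")
```

Even the Earth-Moon light delay of about 1.3 seconds becomes roughly two subjective weeks at this speedup, and the nearest star sits millions of subjective years away, which is the divergence pressure the post gestures at.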
Our morality and ‘metamorality’ already exist; the CEV has in a sense already been evolving for quite some time, but it is inherently a cultural and memetic evolution that supervenes on our biological brains. So purging it of cultural variations is less than wrong: it is itself cultural.
Existing human cultures result from the cumulative interaction of human neurogenetics with the external environment. CEV as described is meant to identify the neurogenetic invariants underlying this cultural and memetic evolution, precisely so as to have it continue in a way that humans would desire.

The rise of AI requires that we do this explicitly, because of the contingency of AI goals. The superior problem-solving ability of advanced AI implies that advanced AI will win any deep clash of directions with the human race. Better to ensure that this clash does not occur in the first place, by setting the AI’s initial conditions appropriately; but then we face the opposite problem: if we use current culture (or just our private intuitions) as a template for AI values, we risk locking in our current mistakes.

CEV, as a strategy for Friendly AI, is therefore a middle path between gambling on a friendly outcome and locking in an idiosyncratic cultural notion of what’s good: you try to port the cognitive kernel of human ethical progress (which might include hardwired metaethical criteria of progress) to the new platform of thought. Anything less risks leaving out something essential, and anything more risks locking in something inessential (though I think the former risk is far more serious).
Mind uploading is another way you could try to humanize the new computational platform, but I see little prospect of whole human individuals being copied intact to a new platform before human-rivaling AI is developed for that platform. (One might also prefer to have something like a theory of goal stability before engaging in self-modification as an uploaded individual.)
Instead of a single coherent future, we will have countless futures, corresponding to the different universes humans will want to create and inhabit after uploading.
I think we will pass through a situation where some entity or coalition of entities has absolute power, thanks primarily to the conjunction of artificial intelligence and nanotechnology. If there is a pluralistic future further beyond that point, it will be because the values of that power were friendly to such pluralism.