Bioconservative and biomoderate singularitarian positions

Let us define a singularitarian as a person who considers it likely that some form of smarter-than-human intelligence will be developed within a characteristic timeframe of a century, and that the manner in which this event occurs is important enough to be worth expending effort to alter. Given this definition, it is perfectly possible to be a bioconservative singularitarian—that is, someone who:
opposes genetic modification of food crops, the cloning and genetic engineering of livestock and pets, and, most prominently, rejects the genetic, prosthetic, and cognitive modification of human beings to overcome what are broadly perceived as current human biological and cultural limitations.
One can accept the (at present only suggestive) factual arguments of Hanson, Yudkowsky, Bostrom and others that smarter-than-human intelligence is the only long-term alternative to human extinction (this is what one might call an “attractor” argument—that our current state simply isn’t stable), whilst taking the axiological and ethical position that our pristine, unenhanced human form is to be held as if it were sacred, and that any modification and/or enhancement of the human form is to be resisted, even if the particular human in question wants to be enhanced. A slightly more individual-freedoms-oriented bioconservative position would be to try very hard to persuade people (subject to certain constraints) to decide not to enhance themselves, or to allow people to enhance themselves only if they are prepared to face derision and criticism from society. A superintelligent singleton could easily implement such a society.
This position seems internally consistent to me, and given the seemingly unstoppable march of technological advancement and its rapid integration into our society (smartphones, Facebook, online dating, YouTube, etc.) via corporate and economic pressure, bioconservative singularitarianism may become the only realistic bioconservative position.
One can even paint a fairly idyllic bioconservative world where human enhancement is impossible and people no longer interact with advanced technology at all: they live in some kind of rural or hunter-gatherer world where the majority of suffering and disease (apart from death, perhaps) is eliminated by a superintelligent singleton, and the singleton takes care to ensure that this world is not “disturbed” by too much technology being invented by anyone. Perhaps people live in a way rather like one would have found on Tahiti before Europeans got there. There are plenty of people who think that they already live in such a world—they are called theists, and they are mistaken (more about this in another post).
For those with a taste for a little more freedom and a light touch of enhancement, we can define biomoderate singularitarianism, which differs from the above in that it sits somewhat further towards the “risqué” end of the human enhancement spectrum, but it isn’t quite transhumanism. As before, we consider a superintelligent singleton running the practical aspects of a society, with most of the people in that society being somehow encouraged or persuaded not to enhance themselves too much, so that the society remains a clearly human one. I would consider Banks’ Culture to be the prototypical early result of a biomoderate singularity, followed by such incremental changes as one might expect due to what Yudkowsky calls “heaven of the tired peasant” syndrome—many people would get bored of “low-grade” fun after a while. Note that in the Culture, Banks describes people with significant emotional enhancements and the ability to change gender—so this certainly isn’t bioconservative, but the fundaments of human existence are not being pulled apart by such radical developments as mind merging, uploading, wireheading or super-fast radical cognitive enhancement.
Bioconservative and biomoderate singularities are compatible with modern environmentalism, in that the power of a superintelligent AI could be used to eliminate damage to the natural world, and humans could live in almost perfect harmony with nature. Harmony with nature would involve a superintelligence carefully managing biological ecosystems and even controlling the actions of individual animals, plants and microorganisms, as well as informing and guiding the actions of human societies, so that no human was ever seriously harmed by any creature (no-one gets infected by parasites, bacteria or viruses unless they want to be, and no-one is killed by wild animals), and no natural ecosystem is seriously harmed by human activity. A variant on this would have all wild animals becoming tame, so that you could stroll through the forest and pet a wildcat.
A biomoderate singularity is an interesting concept to consider, and I think it has some interesting applications to a Friendly AI strategy. It is also, I think, something that will be somewhat easier to sell to most other humans than a full-on, shock level 4, radical transhumanist singularity. In fact we can frame the concept of a “biomoderate technological singularity” in fairly normal language: it is simply a very carefully designed self-improving computer system that is used to eliminate the need for humans to do work that they don’t (all things considered) want to do.
One might well ask: what does this post have to do with instrumental rationality? Well, due to various historical coincidences, the same small group of people who popularized technologically enabled bio-radical stances such as gender swapping, uploading, cryopreservation, etc. also happen to be the people who popularized ideas about smarter-than-human intelligence. When one small, outspoken group proposes two ideas which sound kind of similar, the rest of the world is highly likely to conflate them.
The situation on the ground is that one of these ideas has a viable politico-cultural future, and the other one doesn’t: “bioradical” human modification activates so many “yuck” factors that getting it to fly with educated, secular people is nigh-on impossible, never mind the religious lot. The notion that smarter-than-human intelligence will likely be developed, and that we should try to avoid getting recycled as computronium, is a stretch, but at least it involves only nonobvious factual claims and obvious ethical claims.
It is thus an important rationalist task to separate out these two ideas and make it clear to people that singularitarianism doesn’t imply bioradicalism.
See Also: Amputation of Destiny