What is the outcome that you want to socially engineer into existence? What is it that you want the world to realize?
Global Positive Singularity. As opposed to annihilation, or the many other likely scenarios.
You remind me of myself maybe 15 years ago. Excited about the idea of escaping the human condition through advanced technology, but with the idea of avoiding bad (often apocalyptically bad) outcomes also in the mix; wanting the whole world to get excited about this prospect; writing essays and SF short short stories about digital civilizations which climb to transcendence within a few human days or hours (I have examined your blog); a little vague about exactly what a “positive Singularity” might be, except a future where the good things happen and the bad things don’t.
So let me see if I have anything coherent to say about such an outlook, from the perspective of 15 years on. I am certainly jaded when it comes to breathless accounts of the incomprehensible transcendence that will occur: the equivalent of all Earth’s history happening in a few seconds, societies of inhuman meta-minds discovering the last secret of how the cosmos works and that’s just the beginning, passages about how a googol intelligent beings will live inside a Planck length and so forth.
If you haven’t seen them, you should pay a visit to Dale Carrico’s writings on “superlative futurology”. Whatever the future may bring, it’s a fact that this excited anticipation of everything good multiplied by a trillion (or terrified anticipation of badness on a similar scale, if we decide to entertain the negative possibilities) is built entirely from imagination. It is not surprising that after more than a decade, I have become skeptical about the value of such emotional states, and also about their realism; or at least, a little bored with them. I find myself trying to place them in historical perspective. 2000 years ago there were gnostics raving about transcendental, sublime hierarchies of gods, and how mind, time, and matter were woven together in strange ways. History and science tell us that all that was mostly just a strange conceptual storm happening in the skulls of a few people who died like anyone else and who made little discernible impact on the course of events—that being reserved more for the worldly actors like the emperors and generals. Yet one has to suppose that gnosticism was not an accident, that it was a symptom of what was happening to culture and to human consciousness at that time.
It seems very possible that a great deal of the ecstasy (leavened with dread) that one finds in singularity and transhumanist writing is similarly just an epiphenomenal symptom of the real processes of the age. Lots of people say that, of course; it’s the capitalist ego running amok, denying ecological limits, a new gnostic body-denial that fetishizes calculating machines, blah blah blah. Such criticisms themselves tend to repress or deny the radicalism of what is happening technologically.
So, OK, there shall be robots, cyborgs, brain implants, artificial intelligence, artificial life, a new landscape of life and mind which gets called postbiological or posthuman but much of which is just hybridization of natural and artificial. All that is a huge development. But is it rational to anticipate: immortality; existence becoming transcendentally better or worse than it is; millions of subjective years of posthuman civilizations squeezed into a few seconds; and various other quantitative amplifications of life as we know it, by large powers of ten?
I think at best it is rational to give these ideas a chance. These technologies are new, this hasn’t happened before, we don’t know how far it goes; so we might want to remain open to the possibility that almost infinite space and time lie on the other side of this transition. But really, open to the possibility is about all we can say. This hasn’t happened before, and we don’t know what new barriers and pitfalls lie ahead; and it somehow seems unhealthy to be deriving this ecstatic hope from a few exponential numbers.
Something that the critics of extreme transhumanism often fail to note is the highly utopian altruism that exists within the subculture. To be sure, there are many individualist transhumanists who are cynics and survivalists; but there are also many who aspire to something resembling sainthood, and whose notion of what is possible for the current inhabitants of Earth exhibits an interpersonal utopianism hitherto found only in the most benevolent and optimistic religious and secular eschatologies (those which possess no trace of the desire to punish or to achieve transformation through violence). It’s the dream of world peace, raised to the nth power, and achieved because there’s no death, scarcity, involuntary work, ageing process, and other such pains and frustrations to drive people mad. I wanted to emphasize this aspect because the critics of singularity thought generally love to explain it by imputing disreputable motives: it’s all adolescent power fantasy and death denial and so forth. There should be a little more respect for this aspect, and if they really think it’s impossible, they should show a little more regret about this. (Incidentally, Carrico, who I mentioned above, addresses this aspect too, saying it’s a type of political infantilism, imagining that conflict and loss can be eliminated from the world.)
The idea of “waking up the world” to the imminence of the Singularity, to its glories and terrors, can have an element of this profoundly unworldly optimism about human nature—along with the more easily recognized aspect of self-glorification: I, and maybe my colleagues and guru figures, am the messenger of something that will gain the attention of the world. I think it can be expected that the world will continue to “wake up” to the dawning possibilities of biological rejuvenation, artificial intelligence, brain emulation, and so on, and that it will do this not just in a sober way, but also with bursts of zany enthusiasm and shuddering terror; and it even makes sense to want to foster the sober advance of understanding, if only we can figure out what’s real and what’s illusion about these anticipations.
But enthusiasm for spreading the singularity gospel, the desire to set the world aflame with the “knowledge” of immortality through mind uploading (just one example)… that, almost certainly, achieves nothing deeply useful. And the expectation that in a few years everyone will agree with the Singularity outlook (I’ve seen this idea expressed most recently by the economist James Miller) I think is just unrealistic, and usually the product of some young person who realizes that maybe they can save themselves and their friends from death and drudgery if all this comes to pass, so how can anyone not be interested in it?! It’s a logical deduction: you understand the possibilities of the Singularity, you don’t understand how anyone could want to reject them or dismiss them, and you observe that most people are not singularity futurists; therefore, you deduce that the idea is about to sweep the world like wildfire, and you just happen to be one of the lucky first to be exposed to it. That thought process is naivety and unfamiliarity with normal psychology. It may partly be due to a person of above-average intelligence not understanding how different their own subjectivity is to that of a normal person; it may also be due to not yet appreciating how incredibly cruel life can be, and how utterly helpless people are against this. The passivity of the human race, its resignation and wishful thinking, its resistance to “good news”, is not an accident. And there is ample precedent for would-be vanguards of the future finding themselves powerless and ignored, while history unfolds in a much duller way than they could have imagined.
So much for the general cautionary lecture. I have two other more specific things to say.
First, it is very possible that the quasi-scientific model of mind which underlies so many of these brave new ideas about copies and mind uploads is simply wrong, a sort of passing historical crudity that will be replaced by something very new. The 19th century offers many examples in physics and biology of paradigms which informed a whole generation of thought and futurology, and which are now dead and forgotten. Computing hardware is a fact, but consciousness in a program is not yet a fact and may never be a fact. I’ve posted a lot about this here.
Second, since you’re here, you really should think about whether something like the SIAI notion of friendly singularity really is the only natural way to achieve a “global positive singularity”. The idea of the first superintelligent process following a particular utility function explicitly selected to be the basis of a humane posthuman order I consider to be a far more logical approach to achieving the best possible outcome, than just wanting to promote the idea of immortality through mind uploading, or reverse engineering the brain. I think it’s a genuine conceptual advance on the older idea of hoping to ride the technological wave to a happy ending, just by energetic engagement with new developments and a will to do whatever is necessary. We still don’t know if the premises of such futurisms are valid, but if they are accepted as such, then the SIAI strategy is a very reasonable one.
writing essays and SF short short stories about digital civilizations which climb to transcendence within a few human days or hours (I have examined your blog); a little vague about exactly what a “positive Singularity” might be, except a future where the good things happen and the bad things don’t.
The most recent post on my blog is indeed a very short story, but it is the only such post. Most of the blog is concerned with particular technical ideas and near-term predictions about the impact of technology on specific fields, namely the video game industry. As a side note, several of the game-industry blog posts have been published. The single recent, hastily written story was more about illustrating the out-of-context problem and the speed differential, which I think are the most well-grounded and important generalizations we can make about the Singularity at this point. We all must make quick associative judgements to conserve precious thought-time, but please be mindful of generalizing from a single example and lumping my mindstate into the “just like me 15 years ago” category. But I’m not trying to take an argumentative stance by saying this; I’m just requesting it: I value your outlook.
Yes, my concept of a positive Singularity is definitely vague, but that of a Singularity less so, and within this one can draw a positive/negative delineation.
But is it rational to anticipate: immortality; existence becoming transcendentally better or worse than it is;
Immortality with the caveat of continuous significant change (evolution in mindstate) is rational, and it is pretty widely accepted as an inherent quality of future AGI. Mortality is not an intrinsic property of minds-in-general; it’s a particular feature of our evolutionary history. On the whole, there’s a reasonable argument that its net utility was greater before the arrival of language and technology.
Uploading is a whole other animal. At this point I think physics permits it, but it will be considerably more difficult than AGI itself and would come sometime after (though, of course, time acceleration must be taken into account). However, I do think skepticism is reasonable, and I accept that it may prove to be impossible in principle at some level, even if this proof is not apparent now. (I have one article about uploading and identity on my blog.)
If you haven’t seen them, you should pay a visit to Dale Carrico’s writings on “superlative futurology”.
I will have to investigate Carrico’s “superlative futurology”.
Imagination guides the human future. If we couldn’t imagine the future, we wouldn’t be able to steer the present towards it.
there are also many who aspire to something resembling sainthood, and whose notion of what is possible for the current inhabitants of Earth exhibits an interpersonal utopianism hitherto found only in the most benevolent and optimistic religious and secular eschatologies
Yes, and this is the exact branch of transhumanism that I subscribe to, in part simply because I believe it has the most potential, but more so because I find it has the strongest evolutionary support. That may sound like a strange claim, so I should qualify it.
Worldviews have been evolving since the dawn of language. Realism, the extent to which a worldview is consistent with evidence and actually explains the way the world was, the way the world is, and the way the world could be, is only one aspect of the fitness landscape that shapes the evolution of worldviews and ideas.
Worldviews must also appeal to our sense of what we want the world to be, as opposed to what it actually is. The scientific worldview is effective precisely because it allows us to think rationally and cleanly separate claims about what is from claims about what we want.
AGI is a technology that could amplify ‘our’ knowledge and capability to such a degree that it could literally enable ‘us’ to shape our reality in any way ‘we’ can imagine. This statement is objectively true or false, and its veracity has absolutely nothing to do with what we want.
However, any reasonable prediction of the outcome of such technology will necessarily be nearly equivalent to highly evolved religious eschatologies. Humans have had a long, long time to evolve highly elaborate conceptions of what we want the world to become, if we only had the power. A technology that gives us such power will enable us to actualize those previous conceptions.
The future potential of Singularity technologies needs to be evaluated on purely scientific grounds, but everyone should be aware that the outcome and impact of such technologies will necessarily take the shape of our old dreams of transcendence, and that this is in no way, shape, or form a legitimate argument concerning the feasibility and timelines of said technologies.
In short, many people, when they hear about the Singularity, reach an irrational conclusion: “that sounds like religious eschatologies I’ve heard before, therefore it’s just another instance of that.” You can trace the evolution of ideas and show that the Singularity inherits conceptions of what-the-world-can-become from past gnostic transcendental mythology or Christian utopian millennialism or whatever, but using that to dismiss the predictions themselves is irrational.
I had enthusiasm a decade ago when I was in college, but it faded and receded into the back of my mind. Lately, it has been returning.
I look at the example of someone like Eliezer and I see someone who was exposed to the same ideas, in around the same timeframe, but did not relegate them to a dusty shelf and move on with a normal life. Instead he took it upon himself to alert the world and do what he could to create that better imagined future. I find this admirable.
But enthusiasm for spreading the singularity gospel, the desire to set the world aflame with the “knowledge” of immortality through mind uploading (just one example)… that, almost certainly, achieves nothing deeply useful.
Naturally, I strongly disagree, but I’m confused as to whether you doubt 1) that the outcome for the world would improve with greater awareness, or 2) that increasing awareness is worth any effort.
I think is just unrealistic, and usually the product of some young person who realizes that maybe they can save themselves and their friends from death and drudgery if all this comes to pass, so how can anyone not be interested in it?
Most people are interested in it. Last I recall, well over 50% of Americans are Christians and believe that, just through acceptance of a few rather simple memes and living a good life, they will be rewarded with an unimaginably good afterlife.
I’ve personally experienced introducing the central idea to previously unexposed people in the general atheist/agnostic camp and seeing it catch on. I wonder if you have had similar experiences.
I was once at a party at some film producer’s house, and I saw The Singularity Is Near sitting alone as a centerpiece on a bookstand as you walk in. It made me realize that perhaps there is hope for wide-scale recognition in a reasonable timeframe. Ideas can move pretty fast in this modern era.
Computing hardware is a fact, but consciousness in a program is not yet a fact and
I’ve yet to see convincing arguments showing that “consciousness in a program is impossible”, and at the moment I don’t assign special value to consciousness as distinguishable from human-level self-awareness and intelligence.
The idea of the first superintelligent process following a particular utility function explicitly selected to be the basis of a humane posthuman order I consider to be a far more logical approach to achieving the best possible outcome, than just wanting to
My position is not just to “promote the idea of immortality through mind uploading, or reverse engineering the brain”—those are only some specific component ideas, although they are important. But I do believe that promoting overall awareness increases the probability of a positive outcome.
I agree with the general idea of ethical or friendly AI, but I find some of the details sorely lacking. Namely, how do you compress a supremely complex concept, such as a “humane posthuman order” (which itself is a funny play on words—don’t you think?), into a simple particular utility function? I have not seen even the beginnings of a rigorous analysis of how this would be possible in principle. I find this to be the largest defining weakness in the SIAI’s current mission.
To put it another way: whose utility function?
To many technical, Singularity-aware outsiders (such as myself) reading into FAI theory for the first time, the idea that the future of humanity can be simplified down into a single utility function, or into a transparent, cleanly causal goal system, appears delusional at best, and potentially dangerous.
I find it far more likely (and I suspect that most of the Singularity-aware mainstream agrees) that complex concepts such as a “humane future of humanity” will have to be expressed in human language, and that the AGI will have to learn them as it matures, in a similar fashion to how human minds learn such concepts. This belief is based on reasonable estimates of the minimal information complexity required to represent concepts. I believe the minimal requirements to represent even a concept as simple as “dog” are orders of magnitude higher than anything that could be cleanly represented in human-written code.
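To make concrete the kind of back-of-envelope estimate I mean, here is a toy sketch; every number in it is a made-up placeholder rather than a measurement, and it is only meant to illustrate how the complexity of a learned concept could dwarf what hand-written rules can hold:

```python
# Toy back-of-envelope comparison (all numbers are hypothetical placeholders):
# how much information might be bound up in a learned perceptual concept
# like "dog", versus what a hand-written rule set could plausibly encode.

# Assumption: a concept is learned from many noisy sensory encounters,
# each contributing some small amount of retained information.
examples_seen = 100_000          # hypothetical number of dog encounters
bits_per_example = 1_000         # hypothetical usable bits retained per encounter
concept_bits = examples_seen * bits_per_example       # ~1e8 bits

# Assumption: a hand-written rule set of a few hundred lines,
# at a few hundred bits of real content per line.
lines_of_code = 300
bits_per_line = 300
coded_bits = lines_of_code * bits_per_line             # ~9e4 bits

print(f"learned concept:  ~{concept_bits:.1e} bits")
print(f"hand-coded rules: ~{coded_bits:.1e} bits")
print(f"ratio:            ~{concept_bits / coded_bits:.0f}x")
```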
However, the above criticism concerns the particulars of implementation, and doesn’t amount to disagreement with the general idea of FAI or ethical AI. But as far as actual implementation goes, I’d rather support a project exploring multiple routes, and brain-like routes in particular—not only because there are good technical reasons to believe such routes are the most viable, but because they also accelerate the path towards uploading.
I agree with the general idea of ethical or friendly AI, but I find some of the details sorely lacking. Namely, how do you compress a supremely complex concept, such as a “humane posthuman order” (which itself is a funny play on words—don’t you think) into a simple particular utility function? I have not seen even the beginnings of a rigid analysis of how this would be possible in principle.
Ironically, the idea involves reverse-engineering the brain—specifically, reverse-engineering the basis of human moral and metamoral cognition. One is to extract the essence of this, purifying it of variations due to the contingencies of culture, history, and the genetics and life history of the individual, and then extrapolate it until it stabilizes. That is, the moral and metamoral cognition of our species is held to instantiate a self-modifying decision theory, and the human race has not yet had the time or knowledge necessary to take that process to its conclusion. The ethical heuristics and philosophies that we already have are to be regarded as approximations of the true theory of right action appropriate to human beings. CEV is about outsourcing this process to an AI which will do neuroscience, discover what we truly value and meta-value, and extrapolate those values to their logical completion. That is the utility function a friendly AI should follow.
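Purely as a toy illustration (this is not SIAI’s actual construction, and the refinement step below is a hypothetical placeholder), “extrapolate it until it stabilizes” can be read as a fixed-point iteration over some representation of values:

```python
# Toy fixed-point reading of "extrapolate until it stabilizes".
# The refine() step stands in for the unknown process of idealizing
# what we would want "if we knew more and thought faster".
from typing import Callable

def extrapolate(values: tuple, refine: Callable[[tuple], tuple],
                max_steps: int = 1000) -> tuple:
    """Apply `refine` repeatedly until the values reach a fixed point."""
    for _ in range(max_steps):
        new_values = refine(values)
        if new_values == values:   # stabilized: further refinement changes nothing
            return new_values
        values = new_values
    return values                  # no fixed point found within max_steps

# Hypothetical example: "refinement" here just rounds numeric value-weights.
toy_refine = lambda vs: tuple(round(v, 2) for v in vs)
print(extrapolate((0.333333, 0.666666), toy_refine))   # -> (0.33, 0.67)
```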
I’ll avoid returning to the other issues for the moment since this is the really important one.
I agree with your general elucidation of the CEV principle, but this particular statement stood out as a red flag:
One is to extract the essence of this, purifying it of variations due to the contingencies of culture, history,
Our morality and ‘metamorality’ already exist; the CEV, in a sense, has already been evolving for quite some time, but that evolution is inherently cultural and memetic, supervening on our biological brains. So purging it of cultural variations is worse than wrong: the thing being purged is itself cultural.
The flaw, then, is assuming there is a single evolutionary target for humanity’s future, when in fact the more accurate evolutionary trajectory is adaptive radiation. So the C in CEV is unrealistic. Instead of a single coherent future, we will have countless futures, corresponding to the different universes humans will want to create and inhabit after uploading.
There will be convergent cultural effects (trends we see now), but there will also be powerful divergent effects imposed by the speed of light once posthuman minds start thinking thousands or millions of times faster. This is a constraint of physics with interesting implications; more on this towards the end of this comment.
If one single religion and culture had taken over the world, a universal CEV might have a stronger footing. The dominant religious branch of the West came close, but not quite.
It’s more than just a theory of right action appropriate to human beings; it’s also a question of what you do with all the matter, how you divide resources, what political and economic structures you adopt, and so on.
Given the success of Christianity and related worldviews, we have some guess at features of the CEV: people generally will want immortality in virtual-reality paradises, and they are quite willing (even happy) to trust an intelligence far beyond their own to run the show—but they have a particular interest in seeing it take a human face. Also, even though willing to delegate ultimate authority upward, they will want to take an active role in helping shape universes.
The other day I was flipping through channels and happened upon some late-night Christian preacher channel. He was talking about the New Jerusalem and all that, and there was one bit I found amusing: he said that humans would join God’s task force, help shape the universe, and be able to zip from star system to star system without anything as slow or messy as a rocket.
I found this amusing because, in a way, it’s accurate: physical space travel will be far too slow for beings that think a million times faster and have molecular-level computers for virtual-reality simulation.
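To give a rough sense of the scale, here is a worked example of that light-speed lag; the speedup factor is an assumed placeholder, not a prediction:

```python
# Illustrative arithmetic (assumed numbers): how light-speed delays look
# to a mind running with a large subjective speedup.

C = 299_792_458.0        # speed of light, m/s
SPEEDUP = 1_000_000      # hypothetical subjective speedup factor

def subjective_delay_s(distance_m: float) -> float:
    """One-way light delay, in subjective seconds, for an accelerated mind."""
    return distance_m / C * SPEEDUP

distances_m = {
    "across Earth (~12,742 km)": 12_742e3,
    "Earth to Moon (~384,400 km)": 384_400e3,
    "Earth to Mars (~12 light-minutes)": 12 * 60 * C,
    "to Alpha Centauri (~4.37 light-years)": 4.37 * 365.25 * 24 * 3600 * C,
}

for name, d in distances_m.items():
    days = subjective_delay_s(d) / 86_400
    print(f"{name}: ~{days:,.1f} subjective days (~{days / 365.25:,.1f} subjective years)")
```

Under those assumptions, a signal crossing the Earth already costs about half a subjective day, a round trip to Mars costs subjective decades, and the nearest star is millions of subjective years away, which is the divergence pressure I have in mind.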
Our morality and ‘metamorality’ already exists, the CEV in a sense has already been evolving for quite some time, but it is inherently a cultural & memetic evolution that supervenes on our biological brains. So purging it of cultural variations is less than wrong—it is cultural.
Existing human cultures result from the cumulative interaction of human neurogenetics with the external environment. CEV as described is meant to identify the neurogenetic invariants underlying this cultural and memetic evolution, precisely so as to have it continue in a way that humans would desire. The rise of AI requires that we do this explicitly, because of the contingency of AI goals. The superior problem-solving ability of advanced AI implies that advanced AI will win in any deep clash of directions with the human race. Better to ensure that this clash does not occur in the first place, by setting the AI’s initial conditions appropriately, but then we face the opposite problem: if we use current culture (or just our private intuitions) as a template for AI values, we risk locking in our current mistakes. CEV, as a strategy for Friendly AI, is therefore a middle path between gambling on a friendly outcome and locking in an idiosyncratic cultural notion of what’s good: you try to port the cognitive kernel of human ethical progress (which might include hardwired metaethical criteria of progress) to the new platform of thought. Anything less risks leaving out something essential, and anything more risks locking in something inessential (but I think the former risk is far more serious).
Mind uploading is another way you could try to humanize the new computational platform, but I think there’s little prospect of whole human individuals being copied intact to some new platform, before you have human-rivaling AI being developed for that platform. (One might also prefer to have something like a theory of goal stability before engaging in self-modification as an uploaded individual.)
Instead of a single coherent future, we will have countless many, corresponding to different universes humans will want to create and inhabit after uploading.
I think we will pass through a situation where some entity or coalition of entities has absolute power, thanks primarily to the conjunction of artificial intelligence and nanotechnology. If there is a pluralistic future further beyond that point, it will be because the values of that power were friendly to such pluralism.
I liked this, will reply when I have a chance.