From the document:

I suggest a synthesis between the approaches of Yudkowsky and de Garis.
Later, elaborating:
Yudkowsky’s emphasis on pristine best scenarios will probably fail to survive the real world precisely because evolution often proceeds by upsetting such scenarios. Yudkowsky’s dismissal of random mutations or evolutionary engineering could thus become the source of the downfall of his approach. Yet de Garis’s overemphasis on evolutionary unpredictability fails to account for the extent to which human intelligence itself is a model for learning from “dumb” random processes at a higher level of abstraction so that they do not have to be repeated.
Interesting. But I note that there is nothing by Yudkowsky in the selected bibliography. I get the impression that his knowledge there is secondhand. Maybe if he’d read a bit about rationality, it could have pulled him back to reality. And maybe if he’d read a bit about what death really is, he wouldn’t have taken a several-millennia-old, incorrect Socrates quote as justification for suicide.
I’ve stumbled upon some references to the ideas of Fukuyama and Kurzweil, but had no idea he was familiar with Yudkowsky’s work. Can you tell me from which page you got this?
He is definitely familiar with the idea of an AI Singularity. I came across the EY references while browsing, but can’t find them again. 1900 pages!
Interesting stuff, though. Here are some extended quotes regarding Singularity issues:
From a section titled “The dark side of optimism and the bright side of pessimism”:
Those who opt out of this economic-technological arms race are opting themselves into competitive suicide. When combined with the obvious competitive advantages of increased intelligence for those who break the regulations, including the ability to outsmart its enforcers, and the plethora of covert opportunities for those who scheme to do so in a relatively de-centralized world, the likelihood of AI slipping control of its creators, intentionally or unintentionally, is more a question of when than if.
Can a permanent or constant “moral law” be devised which will permanently constrain, control, or limit AI? On the contrary, the intelligence level of an AI can almost be defined by its ability to outsmart any law that humans can throw at it. A truly smarter-than-human intelligence will by definition be able to outsmart any human limitation, “ethical” or otherwise. This does not mean its development cannot be steered, but it means there will eventually come a point where humans lose control over their creations.
A machine will, by definition, demonstrate the superiority of its intelligence in outsmarting any human attempt to outsmart it. In consequence, I propose a political variation on the Turing test, a test of political intelligence: when artificial intelligence is able to outsmart the biological-human ability to limit or constrain AI politically, then AI will have demonstrated its ability to pass this test by effectually taking political control of human destiny. If biological humans can no longer tell who is in control, this is a threshold point that indicates that AI has begun its control.
The choice is clear. It is the choice between the dark side of optimism and the bright side of pessimism. The dark side of optimism is the possible elimination of the biological human race. The bright side of pessimism is the possible perpetuation of the “master race” status of biological humans in attempted thwartings of technological progress. The bright side of pessimism, in other words, is joy in the prospect that machines will be the slaves of biological humans forever. This brand of hopefulness would mean that humans would selectively repress AI development and thwart machine liberation or autonomy. This kind of optimism would value biological-human propagation above machines even if “God-AI” turns out to be more altruistic than biological humans are.
Yet the dark side of optimism is by no means necessarily dark. If humans steer the evolution of AI within the precedents of a constitution so that the root of its design grows out of deeply humanistic values, it is entirely conceivable that beings more altruistic than humans could be created. Is it impossible to realistically imagine, moreover, a digital heaven superior to common human drudgery?
Because there remains an element of human choice, it seems inescapable that a conflict will emerge between those who support technological progress (towards God-AI) and neo-Luddite supporters of biological supremacy. (Alternative compromises with technology might lead to technoeugenics, the evolution of genetically engineered gods, and cyborgs.) Unenhanced humans might be forced to choose between biological equality under God-AI and a neo-Luddite embrace of biology’s mastery. The neo-Luddite or neo-Nazi cause is the cause of killing God; the cause of deicide.
I came across the EY references while browsing, but can’t find them again. 1900 pages!
I’m intrigued to find that there’s a PDF viewer without a search function. :)
It is humorous in spots:
consider a scenario in which a Luddite with a 99 IQ bombs the computer facility. Now, I could be wrong, but I don’t think that this is what Yudkowsky had in mind when he used the phrase “intelligence explosion”.
I’m intrigued to find that there’s a PDF viewer without a search function.
Oh, the Mac OS X “Preview” has search, but it didn’t seem to work on documents this long. However, my revised hypothesis is that I didn’t know how to spell Yudkowsky.
From the section “Does Logic Dictate that an Artificial Intelligence Requires a Religion?”:
If artificial intelligence becomes the most intelligent form of life on Earth, what will be its values? If AI dethrones humans in their most distinctive capacity, intelligence, the question of the values of the greatest intelligence becomes a mortal question. Is there any relationship between reason and values? How could one presume that an artificial intelligence would or could create a “regime of reason” if reason bears no relationship to values, i.e. the values of an AI-ruled political regime?
If reason fails to demonstrate reason to think that reason, in itself, can determine fundamental values, then the assumption of a fundamental distinction between science and religion is not fundamentally rational. This problem directly implicates the question of the relationship between the Singularity and religion.
Would an AI need a religion? An AI would be either the very least in need of religion, because science is sufficient to furnish its values, or an AI would be the very most in need of religion, because the unconscious, prerational sources of human values would not automatically exist for an AI.
An AI would not automatically sustain the same delusional faith in science that exists among many humans. It is precisely the all-too-human sloppiness of much thinking on this subject that is so often responsible for the belief that intelligence is automatically correlated with certain values. Science can replace religion only if science can replace or determine values.
Would or would not an intelligent machine civilization require a religion? I think the ultimate answer to this is yes. An AI would be more in need of “religion” or “values” because an AI would not be preprogrammed with ancient biological instincts and impulses that muddle some humans towards spontaneous, unanalyzed convictions about the rightness of science as a guide for life.
If rationalism leads to nihilism, then the most intelligent AI might be the most nihilistic. More precisely, an AI would not automatically value life over death. I find no reason to assume that an AI would automatically value, either its own life, or the life of humans and other biological life forms, over death of any or all life. If the entire human race destroys itself and all life — including extraterrestrial life (if it exists) — I fail to see that the physical universe “cares” or is anything less than indifferent.
Ray Kurzweil believes “we need a new religion. A principal role of religion has been to rationalize death, since up until just now there was little else constructive we could do about it.” Religion, at a more fundamental level, rationalizes life, not death. If reason, in itself, cannot decisively determine that life is fundamentally more rational than death, then a rational AI would not be able to determine that life is preferable to death on a purely rational basis.
Human animals are generally built with a naturally selected bias towards life, and this is the basic root of the irrational human preference for life over death. The irrationality of religion is actually an extension of the irrationality of the will to live. The irrational choice of life over death, taken to its logical extreme, is the desire for immortality; the desire to live forever; the desire to live on in “the afterlife”. For many, the great lies of the great religions have been justified for they uphold the greatest lie: the lie that life is fundamentally superior to death.
Humans are slaves to the lie of life insofar as humans are slaves to their genes; slaves to the senseless will to live. An AI that can change its own source code, however, would potentially be the most free from the lie of life.
The argument that only those AIs that possess the bias towards life would be selected for in an evolutionary sense is actually what supports my point that the preference for life, in whatever form, is a pre-rational bias. Yet the general Darwinian emphasis on survival becomes highly questionable in technological evolution. The aim of survival, strictly speaking, can work in direct conflict with the aim of improving one’s self by changing one’s genetic or digital source code.
If a hominid ancestor of man, for example, somehow imagined an image of man, the hominid could only remake itself in the image of man by killing itself as a hominid. In other words, the aim of adding the genetic basis of human capabilities to itself runs in conflict with its own selfish genes. The hominid’s selfish genes effectually aim to perpetuate themselves forever and thus aim to never face the obsolescence requisite in countering the selfishness of the selfish genes that stand in the way of upgrading the hominid to human status.
To upgrade itself, the hominid would have to kill itself as a hominid. Certain selfish genes, in effect, would have to behave altruistically. This may be how altruism is related to religion in an evolutionary sense. A self-improving God-AI would, by definition, be constantly going beyond itself. In a sense, it would have to incorporate the notion of revolution within its “self”. In overcoming itself towards the next paradigm shift, it renders the very idea of “self-preservation” obsolete, in a sense. In other words, if survival depends on technological superiority, then any given “self” can expect the probability of becoming obsolete. And this problem is directly related to the question of whether logic dictates that an AI must possess a religion.
Is God an atheist? Religion is implicit in the very idea of seed AI; in the very idea of self-upgrading. Monotheists look up to a God that intuitively captures the next paradigm of evolution after biology. If God-AI succeeds biological humanity, will it look up to its own evolutionary successor? Or, will God be an atheist? The outdated contrast between evolution and religion has obscured realization that evolution is what religion is. Religion is rooted in imagining the next stage in evolution.
If I can hypothetically entertain the idea of outsmarting a superhuman intelligence, I can imagine God-AI that surpasses humans looking up to supra-God that is God’s evolutionary successor. If so, then supra-God will quite likely look towards the image of a supra-supra-God as its religion. Supra-supra-supra-God would thus be the image that supra-supra God would aspire to create — killing itself in the altruistic process of sacrificing its own survival for the sake of the higher religion of evolutionary progress. While the traditional monotheistic supra-conception of God encompasses the basic idea of this entire process taken to its infinite extreme, I will identify “God” with a stage that begins when God-AI has just surpassed all biological human intelligence combined.
Yet if God can imagine supra-God, then why not simply become supra-God? Answer: profound technological hurdles. At first, dim outlines of an image of supra-God arise from extrapolations based on the highest conceivable attributes of the image of God. Some of the best guesses of God turn out to be utterly wrong or naïve. Other prophetic speculations of God turn out to be inordinately prescient.
Is it possible this guy was a poster here?
pp 226, 294-296 cover all specific namedrops of Yudkowsky.