The man who took his own life on Harvard’s campus Saturday left a 1,904-page suicide note online.
According to the Harvard Crimson, Mitchell Heisman wrote “Suicide Note,” posted at http://suicidenote.info, while living in an apartment near the school. The note is a “sprawling series of arguments that touch upon historical, religious and nihilist themes,” his mother, Lonni Heisman, told the Crimson. She said her son would have wanted people to know about his work.
The complex note, divided into four parts, touches on Christianity, the Holocaust and social progress, among other topics, and mentions Harvard several times.
IvyGate calls the note “probing, deeply researched, and often humorous.”
Heisman was 35 when he shot himself on the steps of Harvard’s Memorial Church Saturday. He had a bachelor’s degree in psychology from the University at Albany. According to the Crimson, he worked in area bookstores and lived on an inheritance from his father, who died when he was young.
I’ve begun skimming a few of the chapters (the titles are nothing if not provocative). On the one hand, I am quite predisposed to view the entire work as mostly bunk, because manifestos of this nature often are. On the other hand, the idea of a philosopher driven to death by his learning is a stimulating enough archetype for me to explore this. And yes, I know that, considering he quotes:
Ordinary people seem not to realize that those who really apply themselves in the right way to philosophy are directly and of their own accord preparing themselves for dying and death. If this is true, and they have actually been looking forward to death all their lives, it would of course be absurd to be troubled when the thing comes for which they have so long been preparing and looking forward.
—SOCRATES, PHAEDO
It’s certain he was playing on that.
I’ve decided to post this here for rationality detox so I don’t pick up any craziness (I’d wager a high probability of there being some there).
He seems to have developed what he terms a sociobiological analysis of the history of liberal democracy, reminiscent in parts of Nietzsche’s Genealogy of Morals. Judging by a few excerpts from the final chapter, this culminates in a kind of singularitarian view and the inevitability of human extinction at the hands of our self-created transhuman Gods.
Mitchell Heisman starts off by saying:

If my hypothesis is correct, this work will be repressed. It should not be surprising if justice is not done to the evidence presented here. It should not be unexpected that these arguments will not be given a fair hearing. It is not unreasonable to think that this work will not be judged on its merits.
This is obviously false—it’s up on the internet, it’s gotten some press coverage, it quite obviously has not been repressed. But he is right that it won’t be judged on its merits, because it’s so long that reading it represents a major time commitment, and his suicide taints it with an air of craziness; together, these ensure that very few people will do more than lightly skim it.
The sad thing is, if this guy had simply talked to others as he went along—published his writing a chapter at a time on a blog, or something—he probably could’ve made a real contribution, with a real impact. Instead, he seems to have gone for 1904 pages with no feedback with which to correct misconceptions, and the result is that he went seriously off the rails.
I just skimmed a few random pages of the book, and ran into this stunning passage:
Marx’s improbable claim that economic-material development will ultimately trump the need for elite human leaders may turn out to be a point on which he was right. What Marx failed to anticipate is that capitalism is driving economic-technological evolution towards the development of artificial intelligence. The advent of greater-than-human artificial intelligence is the decisive piece of the puzzle that Marx failed to account for. Not the working class [as Marx believed—V.], and not a human elite [as Lenin believed—V.], but superhuman intelligent machines may provide the conditions for “revolution”. [...] If this is correct, the first signs of evidence may be unprecedented levels of permanent unemployment as automation increasingly replaces human workers. While this development may begin to require a new form of socialism to sustain demand, artificial intelligence will ultimately provide an alternative to “the dictatorship of the proletariat.” [...] The creation of an artificial intelligence trillions of times greater than all human intelligence combined is not simply the advent of another shiny new gadget. The difference between Leninism-Stalinism and the potential of AI can be compared to the difference between Caesar and God.
The small part of the book I’ve seen so far sounds lucid and without any signs of craziness, and based on this passage, I would guess that there is a whole lot of interesting stuff in there. I’ll try reading more as time permits.
From the document:

I suggest a synthesis between the approaches of Yudkowsky and de Garis.
Later, elaborating:
Yudkowsky’s emphasis on pristine best scenarios will probably fail to survive the real world precisely because evolution often proceeds by upsetting such scenarios. Yudkowsky’s dismissal of random mutations or evolutionary engineering could thus become the source of the downfall of his approach. Yet de Garis’s overemphasis on evolutionary unpredictability fails to account for the extent to which human intelligence itself is a model for learning from “dumb” random processes on higher levels of abstraction so that they do not have to be repeated.
Interesting. But I note that there is nothing by Yudkowsky in the selected bibliography. I get the impression that his knowledge there is secondhand. Maybe if he’d read a bit about rationality, it could have pulled him back to reality. And maybe if he’d read a bit about what death really is, he wouldn’t have taken a several-millennia-old, incorrect Socrates quote as justification for suicide.
I’ve stumbled upon some references to the ideas of Fukuyama and a Kurzweil reference, but had no idea he was familiar with Yudkowsky’s work. Can you tell me from which page you got this?
He is definitely familiar with the idea of an AI Singularity. I came across the EY references while browsing, but can’t find them again. 1900 pages!
Interesting stuff, though. Here are some extended quotes regarding Singularity issues:
From a section titled “The dark side of optimism and the bright side of pessimism”:
Those who opt out of this economic-technological arms race are opting themselves into competitive suicide. When combined with the obvious competitive advantages of increased intelligence for those who break the regulations, including the ability to outsmart its enforcers, and the plethora of covert opportunities for those who scheme to do so in a relatively de-centralized world, the likelihood of AI slipping control of its creators, intentionally or unintentionally, is more a question of when than if.
Can a permanent or constant “moral law” be devised which will permanently constrain, control, or limit AI? On the contrary, the intelligence level of an AI can almost be defined by its ability to outsmart any law that humans can throw at it. A truly smarter-than-human intelligence will by definition be able to outsmart any human limitation, “ethical” or otherwise. This does not mean its development cannot be steered, but it means there will eventually come a point where humans lose control over their creations.
A machine will, by definition, demonstrate the superiority of its intelligence in outsmarting any human attempt to outsmart it. In consequence, I propose a political variation on the Turing test, a test of political intelligence: when artificial intelligence is able to outsmart the biological-human ability to limit or constrain AI politically, then AI will have demonstrated its ability to pass this test by effectually taking political control of human destiny. If biological humans can no longer tell who is in control, this is a threshold point that indicates that AI has begun its control.
The choice is clear. It is the choice between the dark side of optimism and the bright side of pessimism. The dark side of optimism is the possible elimination of the biological human race. The bright side of pessimism is the possible perpetuation of the “master race” status of biological humans in attempted thwartings of technological progress. The bright side of pessimism, in other words, is joy in the prospect that machines will be the slaves of biological humans forever. This brand of hopefulness would mean that humans would selectively repress AI development and thwart machine liberation or autonomy. This kind of optimism would value biological-human propagation above machines even if “God-AI” turns out to be more altruistic than biological humans are.
Yet the dark side of optimism is by no means necessarily dark. If humans steer the evolution of AI within the precedents of a constitution so that the root of its design grows out of deeply humanistic values, it is entirely conceivable that beings more altruistic than humans could be created. Is it impossible to realistically imagine, moreover, a digital heaven superior to common human drudgery?
Because there remains an element of human choice, it seems inescapable that a conflict will emerge between those who support technological progress (towards God-AI) and neo-Luddite supporters of biological supremacy. (Alternative compromises with technology might lead to technoeugenics, the evolution of genetically engineered gods, and cyborgs.) Unenhanced humans might be forced to choose between biological equality under God-AI and a neo-Luddite embrace of biology’s mastery. The neo-Luddite or neo-Nazi cause is the cause of killing God; the cause of deicide.
I came across the EY references while browsing, but can’t find them again. 1900 pages!
I’m intrigued to find that there’s a PDF viewer without a search function. :)
It is humorous in spots:
consider a scenario in which a Luddite with a 99 IQ bombs the computer facility. Now, I could be wrong, but I don’t think that this is what Yudkowsky had in mind when he used the phrase “intelligence explosion”.
I’m intrigued to find that there’s a PDF viewer without a search function.
Oh, the Mac OS X “Preview” has search, but it didn’t seem to work on documents this long. However, my revised hypothesis is that I didn’t know how to spell Yudkowsky.
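For anyone else who wants to hunt for those references, a minimal page-by-page search sketch is below. It assumes the note has been saved locally as suicide_note.pdf (filename assumed) and that the pypdf package is installed; any similar PDF library would do.

```python
# Minimal sketch: search a very long PDF page by page for a term.
# Assumes the file is saved locally as "suicide_note.pdf" (name assumed)
# and that pypdf is installed (pip install pypdf).
from pypdf import PdfReader

reader = PdfReader("suicide_note.pdf")
queries = ["yudkowsky", "yudkowski"]  # include a likely misspelling

for page_number, page in enumerate(reader.pages, start=1):
    text = (page.extract_text() or "").lower()
    for query in queries:
        if query in text:
            print(f"'{query}' found on page {page_number}")
```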
From the section “Does Logic Dictate that an Artificial Intelligence Requires a Religion?”:
If artificial intelligence becomes the most intelligent form of life on Earth, what will be its values? If AI dethrones humans in their most distinctive capacity, intelligence, the question of the values of the greatest intelligence becomes a mortal question. Is there any relationship between reason and values? How could one presume that an artificial intelligence would or could create a “regime of reason” if reason bears no relationship to values, i.e. the values of an AI-ruled political regime?
If reason fails to demonstrate reason to think that reason, in itself, can determine fundamental values, then the assumption of a fundamental distinction between science and religion is not fundamentally rational. This problem directly implicates the question of the relationship between the Singularity and religion.
Would an AI need a religion? An AI would be either the very least in need of religion, because science is sufficient to furnish its values, or the very most in need of religion, because the unconscious, prerational sources of human values would not automatically exist for an AI.
An AI would not automatically sustain the same delusional faith in science that exists among many humans. It is precisely the all-too-human sloppiness of much thinking on this subject that is so often responsible for the belief that intelligence is automatically correlated with certain values. Science can replace religion only if science can replace or determine values.
Would or would not an intelligent machine civilization require a religion? I think the ultimate answer to this is yes. An AI would be more in need of “religion” or “values” because an AI would not be preprogrammed with ancient biological instincts and impulses that muddle some humans towards spontaneous, unanalyzed convictions about the rightness of science as a guide for life.
If rationalism leads to nihilism, then the most intelligent AI might be the most nihilistic. More precisely, an AI would not automatically value life over death. I find no reason to assume that an AI would automatically value, either its own life, or the life of humans and other biological life forms, over death of any or all life. If the entire human race destroys itself and all life — including extraterrestrial life (if it exists) — I fail to see that the physical universe “cares” or is anything less than indifferent.
Ray Kurzweil believes “we need a new religion. A principal role of religion has been to rationalize death, since up until just now there was little else constructive we could do about it.” Religion, at a more fundamental level, rationalizes life, not death. If reason, in itself, cannot decisively determine that life is fundamentally more rational than death, then a rational AI would not be able to determine that life is preferable to death on a purely rational basis.
Human animals are generally built with a naturally selected bias towards life, and this is the basic root of the irrational human preference for life over death. The irrationality of religion is actually an extension of the irrationality of the will to live. The irrational choice of life over death, taken to its logical extreme, is the desire for immortality; the desire to live forever; the desire to live on in “the afterlife”. For many, the great lies of the great religions have been justified for they uphold the greatest lie: the lie that life is fundamentally superior to death.
Humans are slaves to the lie of life insofar as humans are slaves to their genes; slaves to the senseless will to live. An AI that can change its own source code, however, would potentially be the most free from the lie of life.
The argument that only those AIs that possess the bias towards life would be selected for in an evolutionary sense is actually what supports my point that the preference for life, in whatever form, is a pre-rational bias. Yet the general Darwinian emphasis on survival becomes highly questionable in technological evolution. The aim of survival, strictly speaking, can work in direct conflict with the aim of improving one’s self by changing one’s genetic or digital source code.
If a hominid ancestor of man, for example, somehow imagined an image of man, the hominid could only remake itself in the image of man by killing itself as a hominid. In other words, the aim of adding the genetic basis of human capabilities to itself runs in conflict with its own selfish genes. The hominid’s selfish genes effectually aim to perpetuate themselves forever and thus aim to never face the obsolescence requisite in countering the selfishness of the selfish genes that stand in the way of upgrading the hominid to human status.
To upgrade itself, the hominid would have to kill itself as a hominid. Certain selfish genes, in effect, would have to behave altruistically. This may be how altruism is related to religion in an evolutionary sense. A self-improving God-AI would, by definition, be constantly going beyond itself. In a sense, it would have to incorporate the notion of revolution within its “self”. In overcoming itself towards the next paradigm shift, it renders the very idea of “self-preservation” obsolete, in a sense. In other words, if survival depends on technological superiority, then any given “self” can expect the probability of becoming obsolete. And this problem is directly related to the question of whether logic dictates that an AI must possess a religion.
Is God an atheist? Religion is implicit in the very idea of seed AI; in the very idea of self-upgrading. Monotheists look up to a God that intuitively captures the next paradigm of evolution after biology. If God-AI succeeds biological humanity, will it look up to its own evolutionary successor? Or, will God be an atheist? The outdated contrast between evolution and religion has obscured the realization that evolution is what religion is. Religion is rooted in imagining the next stage in evolution.
If I can hypothetically entertain the idea of outsmarting a superhuman intelligence, I can imagine God-AI that surpasses humans looking up to supra-God that is God’s evolutionary successor. If so, then supra-God will quite likely look towards the image of a supra-supra-God as its religion. Supra-supra-supra-God would thus be the image that supra-supra God would aspire to create — killing itself in the altruistic process of sacrificing its own survival for the sake of the higher religion of evolutionary progress. While the traditional monotheistic supra-conception of God encompasses the basic idea of this entire process taken to its infinite extreme, I will identify “God” with a stage that begins when God-AI has just surpassed all biological human intelligence combined.
Yet if God can imagine supra-God, then why not simply become supra-God? Answer: profound technological hurdles. At first, dim outlines of an image of supra-God arise from extrapolations based on the highest conceivable attributes of the image of God. Some of the best guesses of God turn out to be utterly wrong or naïve. Other prophetic speculations of God turn out to be inordinately prescient.
I don’t know how much detox this provides, but this blog has comments from three anonymous posters who claim to have known him.
I have known Mitch since he was born—he is my cousin—and the answer is in there—at age 12 he lost his father—and at the funeral I saw the spark of life go out in him. To lose a father and then describe it as a material process in order to cope explains the next 23 years and ultimate end.
I knew Mitch and he had a good sense of humor. I’m happy to hear his cousin’s insight, as Mitch was a mysterious guy, not prone to intimate discussion. A lot of on-line bloggers are scoffing at his book, which irritates me...if they knew him, how passionate he was about it, they’d have more respect. I wish I could’ve helped Mitch somehow, but he wasn’t one for heart-to-heart talks. A pleasant person to have around though, and I will miss him. For someone with Asperger’s he really tried hard to socialize...at barbecues, art shows, parties, and on hikes. I wish his book all the best.
I knew Mitch for several years and I didn’t know he had Asperger’s. I always enjoyed our talks. I think his book will get out there. Whether that is for the good or not, I don’t know.
The bits about synthetic intelligence mostly seem rather naive—and they seem out of place amidst the long rants about Jesus, Nazis and the Jews. However, a few things are expressed neatly. For example, I liked:
“When it dawns on the most farsighted people that this technology is the future and whoever builds the first AI could potentially determine the future of the human race, a fierce struggle to be first will obsess certain governments, individuals, businesses, organizations, and otherwise.”
However, such statements really do need to be followed by saying that Google wasn’t the first search engine, and that Windows wasn’t the first operating system. Being first often helps—but it isn’t everything.
This is precisely the wrong time to apply outside view thinking without considering the reasoning in depth. That isn’t an appropriate reference class. The ‘first takes all’ reasoning you just finished quoting obviously doesn’t apply to search engines. It wouldn’t be a matter of “going on to say”, it would be “forget this entirely and say...”
I can see you think that this is a bad analogy. However, what isn’t so clear is why you think so.
Early attempts at machine intelligence included Eurisko and Deep Blue. It looks a lot as though being first is not everything in the field of machine intelligence either.
“This new car is built entirely out of radioactive metals and plastic explosives. Farsighted people have done some analysis of the structure and concluded that when the car has run at full speed for a short period of time, the plastic explosives will ignite, driving key portions of the radioactive metal together such that it produces a nuclear explosion.”
However, such statements really do need to be followed by saying that the Model T Ford was the overwhelmingly dominant car of its era and it never leveled an entire city, and that Ferraris go really fast and even then they don’t explode.
An AI capable of self-improvement has more in common with that idiotic nuclear-warhead transformer car than it does with MS Windows or Deep Blue. The part of the AI that farsighted people can see taking control of the future light cone is a part that is not present in, or even related to, internet searching or a desktop OS.
I think that boils down to saying machine intelligence could be different from existing programs in some large but unspecified way that could affect these first-mover dynamics.
I can’t say I find that terribly convincing. If Google develops machine intelligence first, the analogy would be pretty convincing (it would be an exact isomorphism) - and that doesn’t seem terribly unlikely.
It could be claimed that the period of early vulnerability shortens with the time dilation of internet time. On the other hand, the rate of innovation is also on internet time—effectively providing correspondingly more chances for competitors to get in on the action during the vulnerable period.
So, I expect a broadly similar first mover advantage effect to the one seen in the rest of the IT industry. That is large—but not necessarily decisive.
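To make the “both run on internet time” point concrete, here is a toy calculation with numbers assumed purely for illustration: if a uniform speed-up shrinks the vulnerable window and raises the competitors’ innovation rate by the same factor, the expected number of competitor moves during that window stays the same.

```python
# Toy calculation with assumed numbers: a uniform "internet time" speed-up
# shortens the vulnerable window but raises the rate of competitor innovation
# proportionally, so the expected number of competitor moves is unchanged.
base_window_years = 5.0    # assumed length of the early vulnerable period
base_moves_per_year = 2.0  # assumed competitor innovations per year

for speedup in (1, 4, 10):
    window = base_window_years / speedup
    rate = base_moves_per_year * speedup
    print(f"{speedup:>2}x speed-up: window {window:.2f} yr, "
          f"rate {rate:.1f}/yr, expected competitor moves {window * rate:.1f}")
```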
Recursive self-improvement instead of continued improvement by the same external agents. You (I infer from this context) have a fundamentally different understanding of how this difference would play out, but if nothing else the difference is specified.
If you mean to refer to the complete automation of all computer-programming-related tasks, then that would probably be a relatively late feature. There will be partial automation before that, much as we see today with refactoring, compilation, code generation, automated testing, lint tools—and so on.
My expectation is that humans will want code reviews for quite a while—so the elimination of the last human from the loop may take a long time. Some pretty sophisticated machine intelligence will likely exist before that happens—and that is mostly where I think there might be an interesting race, rather than one party pulling gradually ahead.
There could be races and competition in the machine world too. We don’t yet know if there will be anti-trust organisations there that deliberately act against monopolies. If so, there may be all manner of future races and competition between teams of intelligent machines.
Is it possible this guy was a poster here?
pp 226, 294-296 cover all specific namedrops of Yudkowsky.
Does anyone know where I might find a copy? suicidenote.info is down.
http://www.scribd.com/doc/38104189/Mitchell-Heisman-Suicide-Note
Thank you very much.
Computer software seems like an appropriate “reference class” for other computer software to me.
The basic idea is that developing toddler technologies can sometimes be overtaken by other toddlers that develop and mature faster.
Superficial similarities do scary things to people’s brains.
On a related note…
… You aren’t allergic to peanuts I hope!
The first thing to come to mind: it’s hard for me to think of an action less rational than suicide, given this person’s overall situation.
“Suicide? To tell you the truth, I disapprove of suicide more than anything.”
—Vash the Stampede