AGI Quotes
Similar to the monthly Rationality Quotes threads, this is a thread for memorable quotes about Artificial General Intelligence.
Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
Do not quote yourself.
Do not quote comments/posts on LW/OB.
Vernor Vinge
Edsger Dijkstra (1984)
I don’t understand this.
It is seemingly easy to get stuck in arguments over whether or not machines can “actually” think.
It is sufficient to assess the effects or outcomes of the phenomenon in question.
By sidestepping the question of what, exactly, it means to “think”, we can avoid arguing over definitions, yet lose nothing of our ability to model the world.
Does a submarine swim? The purpose of swimming is to propel oneself through the water. A nuclear powered submarine can propel itself through the oceans at full speed for months at a time. It achieves the purpose of swimming, and does so rather better than a fish, or a human.
If the purpose of thinking is isomorphic to:
Model the world in order to formulate plans for executing actions which implement goals.
Then, if a machine can achieve the above, we can say it achieves the purpose of thinking, akin to how a submarine successfully achieves the purpose of swimming.
Discussion of whether the machine really thinks is now superfluous.
It is a similar idea as that proposed by Turing. If you have submarines, and they move through the water and do exactly what you want them to do, then it is rather pointless to ask if what they’re doing is “really swimming”. And the arguments on both sides of the “swimming” dispute will make reference to fish.
If the submarine only “swims” when a human tells it to, I think this is the sense intended by saying a submarine doesn’t really swim any more than a scuba tank breathes under water. People “swim” with the aid of a submarine.
Consider the SpongeBob episode in which Plankton builds a fake Mr. Krabs. The machine is superb, at least as functional as the real Mr. Krabs and stronger and more durable to boot. But without Plankton up in the control room running the thing, it does nothing.
Implicit in the ideas of those who think machines may take over is the assumption that the increase in machines’ capabilities will in some sense naturally, or perhaps even accidentally, include the creation of machine volition, machine will, a machine version of the driver of the machine.
This quoter apparently doubts this assumption, at least about current machines. As long as every powerful machine we build needs a human driver lest it sit there with its metaphorical screen saver on waiting for a volitional agent to command it, then all machines no matter how powerful are just tools.
I don’t think Kurzweil necessarily thinks machines will get volition in his version of the singularity. Kurzweil is much more oriented towards enhanced humans: essentially, or eventually, a human who can access the solution to problems that require a lot of intelligence, but who is still supplying all the volition in the system.
Around here, on the other hand, I think it is essentially assumed that machine intelligence will be independent and with its own volition, which humans will have a hand in constraining by design.
The maker of the quote questions whether volition (will, a primal drive) will arise naturally as part of the progression.
I disagree.
What he’s saying is: submarines traverse water, so it’s irrelevant whether we call what they do “swimming”. Likewise, if a machine can do the things that a thinking being can do, then it’s irrelevant whether it’s “actually” “thinking”.
He refers to this as a settled question in the origin of the quote. Moreover, he capitalizes the terms in question, indicating he perceives the concept as an incorrect reification.
“Sorry Arthur, but I’d guess that there is an implicit rule about announcement of an AI-driven singularity: the announcement must come from the AI, not the programmer. I personally would expect the announcement in some unmistakable form such as a message in letters of fire written on the face of the moon.”—Dan Clemmensen, SL4
Source: http://www.sl4.org/archive/0203/3081.html
This is one of the earliest quotes I read that made it click that nothing I could do with my life would have greater impact than pursuing superintelligence.
Michael Anissimov
… I wonder how “alone” I am in the notion that AGI causing human extinction may not be a net negative, in that so long as it is a sentient product of human endeavors it is essentially a “continuation” of humanity.
Two problems: An obnoxious optimizing process isn’t necessarily sentient. And how much would you really want such a continuation if it say tried to put everything in its future lightcone into little smiley faces?
If it helps, ask yourself how you feel about a human empire that expands through its lightcone, preemptively destroying every single alien species before they can do anything, with a motto of “In the Prisoners’ Dilemma, Humanity Defects!” That sounds pretty bad, doesn’t it? Now note that the AGI expansion is probably worse than that.
Hence my caveat.
I find the plausibility of a sentient AGI constrained to such a value to be vanishingly small.
Not especially, no.
It is one example of what could happen; smileys are just one specific possibility. (Moreover, this is an example which is disturbingly close to some actual proposals.) The size of mindspace is probably large. The size of mindspace that does something approximating what we want is probably a small portion of that.
And the empire systematically wipes out human minorities and suppresses new scientific discoveries because they might disrupt stability. As a result, and to help prevent problems, everyone but a tiny elite is denied any form of life-extension technology. Even the elite have their lifespans extended only to about 130, to prevent anyone from accumulating too much power and threatening the standing oligarchy. Similarly, new ideas for businesses are ruthlessly suppressed. Most people will have less mobility in this setting than an American living today. Planets will be ruthlessly terraformed and then have colonists forcibly shipped there to help start the new groups. Most people have the equivalent of reality TV shows and the hope of winning the lottery to entertain themselves. Most of the population is so ignorant that they don’t even realize that humans originally came from a single planet.
If this isn’t clear, I’m trying to make this about as dystopian as I plausibly can. If I haven’t succeeded at that, please imagine what you would think of as a terrible dystopia and apply that. If really necessary, imagine some puppy and kitten torturing too.
Paperclip optimizer problem, yes. The problem here is in the assumption that a sentient self-programming entity could not adjust its valuative norms in just the same way that you and I do—or perhaps even more greatly so, as a result of being more generally capable than we are.
I’m already assuming that the AGI would not do things we want. Such as letting us continue living. But again; if it is sentient, and capable of making decisions, learning, finding values and establishing goals for itself… even if it also turns the entire cosmos into paperclips while doing so—where’s the net negative utility?
I value achieving heights of intellect, ultimately. Lower-level goals are negotiable when you get down to it.
And eats babies.
You’re willfully trying to make this hypothetical horrible and then expect me to find it informationally significant that a bad thing is bad. This is meaningless discourse; it reveals nothing.
If it isn’t clear that by willfully painting a dystopia you are denuding your position of any meaningfulness (it’s a non-argument), then I don’t know what will make it clear.
You haven’t provided an argument about why what you initially described would be dystopic. You simply assumed that humanity spreading itself at the cost of all other sentient beings would be dystopic.
That’s simply a bald assertion, sir.
Human values change in part because we aren’t optimizers in any substantial sense. We’re giant mechas for moving around DNA (after the RNA’s replication process got hijacked) that have been built blindly by evolution for an environment where the primary dangers were large predators and other humans. Then something went wrong and the mechas got too smart from runaway sexual selection. This narrative may be slightly wrong, but something close to it is correct. More to the point, for much of human history, having values very different from one’s peers was a good way to not have reproductive success. Humans were selected for having incoherent, inconsistent, fluid value systems.
There’s no reason to think that an AGI will fall into that category. Moreover, note that even powerful humans prefer to impose their values on others rather than alter their own values. A sufficiently powerful AGI would likely do likewise.
Regarding the empire, I may need to apologize; I think I attach more negative connotations to the word “empire” than were stated explicitly in my remark, and that they are not shared. Here’s a slightly different analogy that may help: if you have to choose between a future with the United Federation of Planets from Star Trek or the Imperium from Warhammer 40K, which would you choose?
Not Logos, but:
The Imperium in a 40K-like universe and the UFP in a Star Trek-like universe. Switching them would be disastrous in either case. Not that either is optimal even for its own environment, and the actual universe is extremely unlikely to resemble either fiction. I agree that, given an unlikely future where humans still in control of their policies expand into space and encounter aliens, being able to afford being nice to them is better than not being able to, and actually being nice to them is better than not if one can afford to.
I was assuming the latter. As to the former, again: hence my caveat. I don’t much care what the possibility of AGI mindspace is, I’ve already arbitrarily limited the kinds I’m talking about to a very narrow window.
So objecting to my valuative statement regarding that narrow window with the statement, “But there’s no reason to think it would be in that window!”—just shows that you’re lacking reading skills, to be quite frank.
I don’t much care what the range of possible values is for f(x) for x=0..10000000, when I’ve already asked the question what is f(10)? If it’s a sentient entity that is recursively intelligent, then at some point it alone would become more “cognizant” than the entire human race put together.
If you were put in a situation where you had to choose between letting the world be populated by cows, or by people, which would you choose?
Samuel Butler (1872)
Eliezer Yudkowsky (2008)
EY changed it in the published version to:
“The AI neither hates you, nor loves you, but you are made out of atoms that it can use for something else.”
My favorite paraphrase is my own:
“The AI does not hate you, nor does it love you, but you are made of atoms it can use for something else.”
I like the rhythm of this one best. It can be sung.
Let us walk together to the kirk, and all together pray, while each to our great AI bends — old men, and babes, and loving friends, and youths and maidens gay! :D
Whether the AI loves -- or hates, you cannot fathom, but plans it has indeed for your atoms.
Arthur C. Clarke (1968)
“There are lots of people who think that if they can just get enough of something, a mind will magically emerge. Facts, simulated neurons, GA trials, proposition evaluations/second, raw CPU power, whatever. It’s an impressively idiotic combination of mental laziness and wishful thinking.”—Michael Wilson
Vernor Vinge
… in original.
Alan Turing (1951)
Theodore Roosevelt
I.J. Good (1970)
-- same paper
Norbert Wiener (1949)
“In the universe where everything works the way it common-sensically ought to, everything about the study of Artificial General Intelligence is driven by the one overwhelming fact of the indescribably huge effects: initial conditions and unfolding patterns whose consequences will resound for as long as causal chains continue out of Earth, until all the stars and galaxies in the night sky have burned down to cold iron, and maybe long afterward, or forever into infinity if the true laws of physics should happen to permit that. To deliberately thrust your mortal brain onto that stage, as it plays out on ancient Earth the first root of life, is an act so far beyond “audacity” as to set the word on fire, an act which can only be excused by the terrifying knowledge that the empty skies offer no higher authority.”—Eliezer Yudkowsky
Stephen Hawking
Garet Garrett (1926)
AAAI PRESIDENTIAL PANEL ON LONG-TERM AI FUTURES: 2008-2009 STUDY, Interim Report from the Panel Chairs
Good (1965)
The use of “unquestionably” in this quote has always irked me a bit, despite the fact that I find the general concept reasonable.
Samuel Butler (1863)
Konstantin Kakaes
George Dyson (1998)
Now we just need machines on our side and we’ll have a cute little love-triangle.
So then, even when we have an FAI, all three parties will be unhappy?
T.M. Georges (2004)
Where did he say this? A search turns up only this page. Thanks!
In Digital Soul, if I recall correctly.
Keith Stanovich
Why we’re doomed reason #692...
Here is Hugo de Garis, in the opening of The Artilect War:
I.J. Good (1970)
-Paul F. Christiano
Samuel Butler (1880)
Bill Gates
I can only assume he wasn’t actually talking about an AGI-level breakthrough. I don’t think I’d expect him to underestimate the impact or value of AGI that severely.
This is the man who thought Windows Me was a good idea.
Actually “10 Microsofts” may be an overestimate :-) Microsoft’s success in its mission to make personal computers ubiquitous seems important to making AGI possible.
I am the psychotherapist. Please, describe your problems. Each time you are finished talking, type RET twice.
I feel so tired
Why do you say?
I fall asleep all the time
Is it because of your plans that you say?
Yes, I work too hard
Maybe your life has something to do with this.
Indeed!
Can you elaborate on that?
Well, I work so can improve my life
What makes you believe?
I read it somewhere
Is it because of the people you hang around with that you say?
I bet it is because of that
I would appreciate it if you would continue.
[...]
my secretary will send you a bill.
-- emacs, doctor mode, the built-in psychotherapist
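(If anyone wants to reproduce this: in a stock GNU Emacs, M-x doctor starts the built-in ELIZA-style psychotherapist shown above, and you end each of your turns by typing RET twice, just as the opening line of the transcript says. I’m going from memory of the stock command, so treat this as a pointer rather than documentation.)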
Margaret Boden, Artificial Intelligence and Natural Man, p. 463
Good (1959)
Minsky (1984)
Wozniak declared to his audience that “we’re already creating the superior beings, I think we lost the battle to the machines long ago.”
http://au.ibtimes.com/articles/157802/20110606/steve-wozniak-humans-will-soon-surrender-superiority-to-machines.htm
Kevin Warwick (1998)
Julius Lukasiewicz (1974)
You have a lot of quotes to share.
James Barrat
-Yvain
Cade (1966), p. 225
Cade (1966), p. 220
Page 223 includes this drawing of self-reproducing machines.
Cade, Other Worlds Than Ours (1966), pp. 214-219
Cade, Other Worlds Than Ours (1966), pp. 213-214
Samuel Butler, 1872
(My own answer to Butler’s question is “No” for the reason Moravec gave in 1988.)
Kevin Warwick, March of the Machines (1997)
Ray Kurzweil, The Age of Spiritual Machines, p. 3
-Paul Almond
Waldrop (1987)
Al-Rodhan (2011), pp. 242-243, notices the stable self-modification problem.
From Michie (1982):
Woody Bledsoe, quoted in Machines Who Think.
Crevier, AI: The Tumultuous History of the Search for Artificial Intelligence, p. 341
I.J. Good (1998)
Peter Kugel
James Barrat
Havelock Ellis, 1922
Steve Torrance (2012)
Cade (1966), p. 228
Shorter I.J. Good intelligence explosion quote:
Source.
Albert Einstein
Looking more closely, this much-duplicated “quote” seems to be a paraphrase of something he wrote in a letter to Heinrich Zangger in the context of the First World War: “Our entire much-praised technological progress, and civilization generally, could be compared to an axe in the hand of a pathological criminal.”
I do think about the AGI problem in much this way, though. E.g. in Just Babies, Paul Bloom wrote:
I think our current civilization is like a two-year-old. The reason we haven’t destroyed ourselves yet, but rather just bitten some fingers and ruined some carpets, is that we didn’t have any civilization-lethal weapons. We’ve had nuclear weapons for a few decades now and haven’t blown ourselves up yet, but there were some close calls. In the latter half of the 21st century we’ll acquire some additional means of destroying our civilization. Will we have grown up by then? I doubt it. Civilizational maturity progresses more slowly than technological power.
“… This is the subject matter of Fun Theory, which ultimately determines the Fate of the Universe. For if all goes well, the question “What is fun?” shall determine the shape and pattern of a billion galaxies.”—Eliezer Yudkowsky