[LINK] NYT Article about Existential Risk from AI
Author: Huw Price (Bertrand Russell Professor of Philosophy at Cambridge)
The article is mainly about the Centre for the Study of Existential Risk and the author’s speculation about AI (and his association with Jaan Tallinn). Nothing made me really stand up and think “This is something I’ve never heard on Less Wrong”, but it is interesting to see existential risk and AI getting more mainstream attention, and the author effectively taboos the word “intelligence” by deliberately declining to define it.
The comments all miss the point or reproduce cached thoughts with frustrating predictability. I think I find them so frustrating because these do not seem to be unintelligent people (by the standards of the internet, at least; their comments have good grammar and vocabulary), but they are not really processing.
PSA: Want to have a positive impact, quickly? Go to the NYT page linked in the OP and leave a comment.
EDIT: More and more nonsense comments, highly upvoted too, but not one from a LWer. This is how public perception is shaped, and the fruit is so low-hanging it should be called potatoes.
Edit: Done.
+1
I didn’t feel like entering the morass of the comment debate, but I added a simple informational comment, mentioning FHI and SI.
This is one of the only mainstream articles I have ever seen that actually “gets the point” about just how dangerous AI could be. The author takes AI quite seriously, and understands that an AI can be dangerous even if it is not malicious. This puts this article miles ahead of basically every other similar piece.
The thing about this article that scores the most points with me, though, is the lack of mention of the various works of fiction that try to talk about AI. All too often, the author of this kind of article starts talking about how robots trying to kill us is just like Terminator, or about how Asimov’s three laws of robotics are the kind of thing needed to deal with AI. But the author of this article very wisely avoided the pitfall of generalizing from fictional evidence, so thumbs up from me.
Huw Price is one of my favorite contemporary philosophers. Here is his list of publications, which has interesting papers on decision theory, causation, the arrow of time, the interpretation of quantum mechanics, naturalism, and truth.
I second the recommendation. His work on the arrow of time is classic, of course, but I’d particularly encourage people to read his stuff on naturalism and truth, especially the papers collected in his book Naturalism Without Mirrors (most of which are available for download on his website, I think). A very useful (and, in my opinion, largely correct) counterpoint to the LW orthodoxy on these subjects.
For a quick introduction to his approach, try his three Descartes lectures, available here.
Thanks for that.
I read NWM as well as a number of his other papers earlier this year, and while I enjoyed them a great deal I still struggle to understand the basic motivations for, and the plausibility/coherence of, anti-representationalism/global expressivism. Why not rest content with commonsensical expressivism within restricted domains (culture/psychology/morals)? Total metaphysical and scientific expressivism makes little sense to me; it seems obvious that there must be some underlying medium that governs our “discursive practices”. I haven’t read FFT (waiting on the 2nd ed), but I don’t see a semantic/truth theory trumping my confidence in science as a method of representational success.
Would appreciate pointers, thoughts or conversation.
The comments are somewhat disappointing: not very charitable readings of the article and no real attempt to speak to the thrust of the argument.
Also, the oft-repeated phrase, w/r/t the risks of technology, that we face “losing our humanity” desperately needs to be tabooed.
My draft attempt at a comment. Please suggest edits before I submit it:
The AI risk problem has been around for a while now, but no one in a position of wealth, power, or authority seems to notice (unless it is all kept secret). If you don’t believe AI is a risk, or even possible, consider this: we ALREADY have more computational power available than a human brain. At some point, sooner rather than later, we will be able to simulate a human brain. Just imagine what you could do if you had perfect memory, could think 10x, 100x, or 1,000,000x faster than anyone else, could compute math equations perfectly in an instant, and so on. No one on this planet could compete with you, and with a little time no one could stop you (and that is just a crude brain simulation).
Here are two websites that go into much greater detail about the problem:
AI Risk & Friendly AI Research: http://singularity.org/research/ http://singularity.org/what-we-do/
Facing the Singularity: http://facingthesingularity.com/2012/ai-the-problem-with-solutions/
In a word, IARPA. In a sentence:
They are large and well-funded.
Hmm, looks cool.
Whatever the author’s motivations, that definition is unnecessary in the present context. As Chalmers noted (sect. 3), the key premises in the argument for the singularity can be formulated without relying on the concept of intelligence. What is needed is instead the notion of a self-amplifying capacity, coupled with the claims that (1) we can create systems that exhibit that capacity to a greater degree than we do and that (2) increases in that capacity will be correlated with changes in some property or properties that we care about.
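(To make the structure explicit, here is a rough schematic of that capacity-based argument in symbols. This is my own paraphrase of the idea, not Chalmers’ exact formulation: G stands for the self-amplifying capacity and H for the correlated property we care about.)

```latex
% Rough schematic of the capacity-based argument (illustrative paraphrase, not Chalmers' own wording).
% G = a self-amplifying capacity; H = a correlated property we care about.
\begin{align*}
\text{P1:}\quad & \text{We can build a system } S_1 \text{ with } G(S_1) > G(\text{humans}). \\
\text{P2:}\quad & \text{If } G(S_n) > G(\text{humans}), \text{ then } S_n \text{ can build } S_{n+1}
                  \text{ with } G(S_{n+1}) > G(S_n). \\
\text{P3:}\quad & \text{Increases in } G \text{ are correlated with changes in some valued property } H. \\
\text{C:}\quad  & \text{Iterating P2 yields systems whose } H \text{ differs radically from ours,} \\
                & \text{with no definition of ``intelligence'' needed anywhere in the argument.}
\end{align*}
```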
If people would just read Intelligence Explosion: Evidence and Import, these debates would get a lot further.
Different audience, different language. I’m just impressed that an NYT op-ed actually contained these sentences:
That’s a very gentle nudge toward a radical shift in how intelligence is generally thought of. Simple analogies and simple terminology (except for ‘optimization problem’, which I think could be understood from the context) for people reading the paper over a bowl of cereal.
Fair, I liked the article, too.
I was responding to the last paragraph of the OP, not the first.
It’s a pretty accessible article. I’m not fully informed on the AI debate, but does anyone know if there are good papers discussing:
It seems we have a new avatar for Clippy: the automated IKEA furniture factory.
Huw’s idea that intelligence will escape the constraints of biology is probably wrong. The reason is that machine intelligence will be biological. Siri, Google, Watson, etc. are firmly biological. Non-biological things are rocks and stars: things to which the idea of cumulative adaptive evolution does not apply.
That is not how people normally use the term “biological”, and in particular it’s not how Huw used it.
“Biology” is a standard dictionary term, which refers to the study of life. The claim that in the future the most advanced creatures won’t be alive is pretty outrageous. Pretty outrageously wrong, I would say. Not understanding this issue is a common confusion—which is why it needs pointing out from time to time.
(Found a bug in the implementation of the karma penalty: if an ancestor comment goes to −4, responses to any of its descendants will incur the karma penalty. So far, so correct. However, if an ancestor comment goes back from −4 to −3, and is thus eligible for answers without a karma penalty, that change will not propagate down the tree. I.e., I can make this comment, having upvoted the parent from −4 to −3, without a karma penalty. If I replied to Baughn’s +6 comment, however, the outdated karma penalty would still erroneously apply.
Why this comment? Explanation and proof in one: “The answer—by demonstration—would take care of that, too.”)
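(For anyone curious how a stale-flag bug like the one described above typically arises, here is a minimal, purely hypothetical Python sketch, not the actual site code. The class name `Comment`, the `reply_penalty` flag, and the −4 threshold are my own stand-ins for whatever the real implementation uses.)

```python
# Hypothetical sketch of how the karma-penalty bug described above could
# arise (NOT the actual LW codebase): the reply-penalty flag is pushed
# down the comment tree when an ancestor drops to -4, but never cleared
# when that ancestor recovers.

PENALTY_THRESHOLD = -4  # threshold assumed from the comment above


class Comment:
    def __init__(self, score=0, parent=None):
        self.score = score
        self.parent = parent
        self.children = []
        self.reply_penalty = False  # cached flag: the source of the bug
        if parent is not None:
            parent.children.append(self)
            # Inherit the cached state at creation time.
            self.reply_penalty = (parent.reply_penalty
                                  or parent.score <= PENALTY_THRESHOLD)

    def set_score(self, new_score):
        self.score = new_score
        if new_score <= PENALTY_THRESHOLD:
            self._propagate_penalty()
        # BUG: no propagation when the score climbs back above the
        # threshold, so descendants keep the stale flag.

    def _propagate_penalty(self):
        for child in self.children:
            child.reply_penalty = True
            child._propagate_penalty()

    def penalty_for_reply(self):
        # A correct implementation would recompute from live ancestor
        # scores instead of trusting the cached flag.
        return self.reply_penalty or self.score <= PENALTY_THRESHOLD


# Reproducing the reported behaviour:
root = Comment(score=0)
child = Comment(score=6, parent=root)   # e.g. a +6 reply
root.set_score(-4)                      # ancestor hits -4: penalty propagates down
root.set_score(-3)                      # ancestor recovers to -3 ...
print(root.penalty_for_reply())         # False: replying to the -3 comment is fine
print(child.penalty_for_reply())        # True: stale flag still erroneously applies
```

(In a design like this, the fix would be either to clear the flag on descendants when the ancestor recovers, or to drop the cache entirely and walk the live ancestor scores at reply time.)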