In the Singularity Movement, Humans Are So Yesterday (long Singularity article in this Sunday’s NY Times; it isn’t very good)
http://news.ycombinator.com/item?id=1426386
I agree that this article isn’t very good. It commits the standard error of conflating a lot of different ideas about what the Singularity would entail. It emphasizes Kurzweil way too much, and includes Kurzweil’s fairly dubious ideas about nutrition and health. The article also treats Andrew Orlowski as a serious critic of the Singularity, letting him make unsubstantiated claims that the Singularity will only help the rich. Given that Orlowski’s entire approach is to criticize anything remotely new or weird-seeming, I’m disappointed that the NYT would use him as a serious critic in this context. The article strongly reinforces the perception that the Singularity is just a geek-religious thing. Overall, not well done at all.
I’m starting to think SIAI might have to jettison the “singularity” terminology (for the intelligence explosion thesis) if it’s going to stand on its own. It’s a cool word, and it would be a shame to lose it, but it’s become associated too much with utopian futurist storytelling for it to accurately describe what SIAI is actually working on.
Edit: Look at this Facebook group. This sort of thing is just embarrassing to be associated with. “If you are feeling brave, you can approach a stranger in the street and speak your message!” Seriously, this practically is religion. People should be raising awareness of singularity issues not as a prophecy but as a very serious and difficult research goal. It doesn’t do any good to have people going around telling stories about the magical Future-Land while knowing nothing about existential risks or cognitive biases or friendly AI issues.
I’m not sure that your criticism completely holds water. Friendly AI is, simply put, only a worry that has convinced some Singularitarians. One might not be deeply concerned about it for a couple of example reasons: 1) you expect uploading to come well before general AI, or 2) you think the probable technical path to AI will force many intermediate stages of much lower intelligence, which are likely to give us good data for solving the problem.
I agree that this Facebook group does look very much like something one would expect out of a missionizing religion. This section in particular looked like a caricature:
To raise awareness of the Singularity, which is expected to occur no later than the year 2045, we must reach out to everyone on the 1st day of every month.
At 20:45 hours (8:45pm) on the 1st day of each month we will send SINGULARITY MESSAGES to friends or strangers.
Example message:
“Nanobot revolution, AI aware, technological utopia: Singularity2045.”
Aside from the pseudo-missionary tone, the certainty about 2045 is the most glaring aspect of this. Also note that some of the people associated with this group are very prominent Singularitarians and Transhumanists: Aubrey de Grey is listed as an administrator.
But one should remember that reversed stupidity is not intelligence. Moreover, there’s a reason missionaries sound like this: they have very high confidence in their own correctness. If one had a similarly high confidence in the probability of a Singularity event, thought that the event was more likely to occur safely and more likely to occur soon if more people were aware of it, bought into something like the galactic colonization argument, and believed that sending messages like this had a high chance of making people aware and getting them to take you seriously, then this would be a reasonable course of action. Now, that’s a lot of premises, some of which have reasonable likelihoods and others of which have very low ones. Obviously there’s a very low probability that sending out these sorts of messages is at all a net benefit. Indeed, I have to wonder whether there’s any deliberate mimicry of how religious groups send out messages, or whether successfully reproducing memes naturally hit on a small set of methods of reproduction (though if that were the case, I’d expect them to hit on an actually useful method of reproduction). And in fairness, they may just be using a general model of how one goes about raising awareness for a cause. For some causes, simple, frequent appeals to emotion are likely an effective method (for example, in making people aware of how common sexual assault is on college campuses, short messages that shock probably do a better job than lots of fairly dreary statistics). So the primary mistake is just using the wrong model of how to communicate with people.
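To make the conjunction point concrete, here is a minimal sketch with made-up, purely illustrative probabilities for each premise; the specific numbers and premise labels are my assumptions, not estimates anyone in this thread has actually given:

# Rough illustration: the message-campaign argument only goes through if
# every premise holds, so (treating them as roughly independent) the joint
# probability is the product of the individual ones. All numbers below are
# invented for illustration only.
premises = {
    "a Singularity event is likely": 0.5,
    "awareness makes it safer": 0.5,
    "awareness makes it sooner": 0.4,
    "the galactic colonization argument holds": 0.3,
    "these messages make people aware and take you seriously": 0.05,
}

joint = 1.0
for name, p in premises.items():
    joint *= p

print(f"Joint probability the campaign is a net benefit: {joint:.4f}")
# ~0.0015 -- even moderate confidence in most premises leaves a very low
# probability that the whole chain holds.

The point is just that a conjunctive argument with one weak link (here, the assumption that the messages are persuasive) ends up with a very low overall probability, which is why the campaign looks unlikely to be a net benefit.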
Speaking of things to be worried about other than AI, I wonder if a biotech disaster is a more urgent problem, even if a less comprehensive one.
Part of what I’m assuming is that developing a self-amplifying AI is so hard that biotech could be well-developed first.
While it doesn’t seem likely to me that a biotech disaster could wipe out the human race, it could cause huge damage: I’m imagining diseases aimed at monoculture crops, or plagues resulting from terrorism or incompetent experiments.
My other assumptions are that FAI research depends on a wealthy, secure society with a good bit of surplus wealth for individual projects, and is likely to remain highly dependent on a small number of specific people for the foreseeable future.
On the other hand, FAI is at least a relatively well-defined project. I’m not sure where you’d start to prevent biotech disasters.
That’s one hell of a “relatively” you’ve got there!
Agreed, but… they’d even have to change their own name!
It’s better than past mainstream Singularity articles, IMO; unfortunately, Kurzweil is treated as an authority, but at least it’s written with some respect for the idea.
It does seem to be about a lot of different things, some of which are just synonymous with scientific progress (I don’t think it’s any revelation that synthetic biology is going to become more sophisticated.)
I’m curious: Was the SIAI contacted for that article? I haven’t had time to read it all, but a word-search for “Singularity Institute” and “Yudkowsky” turned up nothing.
I hear Michael Anissimov was not contacted, and he’s probably the one they’d have the press talk to.