Time Magazine has an article about the Singularity...
...and it is surprisingly positive.
The money quote, IMO, given how mainstream a publication this is:
“The difficult thing to keep sight of when you’re talking about the Singularity is that even though it sounds like science fiction, it isn’t, no more than a weather forecast is science fiction. It’s not a fringe idea; it’s a serious hypothesis about the future of life on Earth. There’s an intellectual gag reflex that kicks in anytime you try to swallow an idea that involves super-intelligent immortal cyborgs, but suppress it if you can, because while the Singularity appears to be, on the face of it, preposterous, it’s an idea that rewards sober, careful evaluation.”
This was almost too positive. There are worthwhile objections to Kurzweil’s version of the Singularity, but this article mainly pointed out some of the sillier objections (e.g. maybe a computer could never be Truly Intelligent, it sounds kinda like the rapture, etc.). Still, it was good to see a mention of SIAI and Friendly AI in a serious, positive article in a mainstream publication. (Too bad about the cover, though I guess they couldn’t pass up the prophecy-of-immortality angle.)
Because that’s what’s on their readers’ minds.
Wow! I am somewhat shocked that the Singularity has gone mainstream enough to make the cover of Time, and get a largely positive treatment at that.
It’s the kind of thing which makes me think the Flynn effect is making a perceptible difference.
Very good overall. The biggest problem is “Indefinite life extension becomes a reality; people die only if they choose to. Death loses its sting once and for all. Kurzweil hopes to bring his dead father back to life.” The prediction that is both least likely and most shocking is introduced without any explanation of how it could be done or how likely Kurzweil thinks it is.
The writer seems unaware of the right-hand turn (the point, several years back, where single-core clock speeds stopped climbing and chip makers switched to adding cores instead).
Took me a while to find anything on this. Here is an article from 2005.
http://www.gotw.ca/publications/concurrency-ddj.htm
How serious a problem is this for Kurzweil’s predictions? It seems fairly important to account for (and, of course, to add a couple more decades for the planning fallacy). But how much faster do computers need to get to upload a human brain or run an AI architecture?
Moore’s law still holds. But you need to be able to write highly parallelizable code.
It is not clear that anyone outside of the computer world is aware of this. The chip makers’ PR departments are trying their best to hide this fact. (“It’s a Core Two Duo Pro X2400! Don’t ask how fast it is!”)
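Since “highly parallelizable” may not mean much to non-programmers, here is a minimal Python sketch (the function and workload are made up purely for illustration) of the difference between code that can and can’t use the extra cores. The serial version is stuck at single-core speed, which has roughly plateaued; the parallel version benefits from more cores only because the chunks of work are independent of one another.

    # Stand-in workload: one independent slice of a larger computation.
    from concurrent.futures import ProcessPoolExecutor

    def simulate_chunk(seed):
        total = 0
        for i in range(1000000):
            total += (seed * 31 + i) % 97
        return total

    if __name__ == "__main__":
        seeds = range(8)

        # Serial: bound by single-core clock speed, which has stopped improving much.
        serial = [simulate_chunk(s) for s in seeds]

        # Parallel: the same chunks spread across cores.
        # This only helps because no chunk depends on another's result.
        with ProcessPoolExecutor() as pool:
            parallel = list(pool.map(simulate_chunk, seeds))

        assert serial == parallel

If the problem doesn’t decompose like this, throwing more transistors at it buys little, which is why the shift to multicore matters for timelines that extrapolate raw hardware speed.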
Time interviewed Ray Kurzweil a while ago for 10 Questions, although this is the first full article of theirs I’ve read on the subject.
I’m pretty surprised that they mentioned ‘Non-Kurzweilians’ and then failed to explain how their views differ. The Kurzweilian singularity does seem to be the one that would be most interesting to the general public, however.
Here is a video on the topic from Time magazine: When Robots Attack! Should We Fear a Singularity?
I’m glad to see this too, but:
I think the writer said “reward” when they meant “deserve”, although I suppose its proponents would claim “reward” works too :-P