The Guardian (prominent UK newspaper) on friendly (or otherwise) artificial general intelligence.
Interesting because it's a 'popular culture' look at AI ideas we might consider fairly basic. It might be a bit sensationalist; the tagline is "AI scientists want to make gods. Should that worry us? Singularitarians believe artificial intelligence will be humanity's saviour. But they also assume AI entities will be benevolent."
What a disheartening article. The whole thing can be summed up with a quote from Three Major Singularity Schools:
"Hey, man, have you heard? There's this bunch of, like, crazy nerds out there, who think that some kind of unspecified huge nerd thing is going to happen. What a bunch of wackos! It's geek religion, man."
Reading this article and the comments section really drove home how important rationality skills are when thinking about the future.
Agreed. I hope you (and other LW people) contribute to the discussion to try to correct some of these misconceptions.
It's an important reminder of how strange and scary these ideas seem at first glance, and of the inferential distances involved.
How would you estimate the percentage of LWers in the Singularitarian movement? Maybe most Singularitarians really are that clueless.
If you Google "Singularitarian", the obsolete Singularitarian Principles document on yudkowsky.net is the second link. It would be good if the obsolete notice steered readers to more current sources, including LessWrong.