Less Wrong link exchange
We’ve had similar threads before, but not for a while, so I thought I’d make one.
Basic rules: share links that are relevant to Less Wrong areas of interest but aren’t worthy of their own post. Please include a brief description with each link. (My own contributions are below.)
Nate Silver on Herman Cain and the hubris of experts. Not really about politics in the mind-killing sense, but about uncertainty and overconfidence in political predictions. Both peter-hurford and I quoted from it in the monthly quotes thread.
It’s a solid article on its political-science analysis alone; I obviously also recommend it.
So, first post in LW!
New TED Talks video about the role of Bayesian inference in controlling human movement: http://www.ted.com/talks/daniel_wolpert_the_real_reason_for_brains.html
Welcome! You should officially say hello; it’s free karma.
thx, just said a nice hello!
BBC News: Signs of ageing halted in the lab
http://ai-class.syavash.com/naivebayes Someone in Norvig and Thrun’s AI class made a Bayesian classifier with Laplace smoothing. It shows you the complete equations it generates and lets you set the text to classify, the text in each training set, and the smoothing parameter, so it’s a great tool for direct instruction.
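For anyone who wants to see the mechanics behind a tool like that, here is a minimal sketch of a multinomial naive Bayes classifier with Laplace (add-k) smoothing. The function names and the movie/song sample data are my own illustrations, not taken from the linked tool:

```python
import math
from collections import Counter

def train(docs_by_class):
    """Build per-class word counts and document counts from labelled text."""
    word_counts = {c: Counter() for c in docs_by_class}
    doc_counts = {c: len(docs) for c, docs in docs_by_class.items()}
    for c, docs in docs_by_class.items():
        for doc in docs:
            word_counts[c].update(doc.lower().split())
    return word_counts, doc_counts

def classify(text, word_counts, doc_counts, k=1.0):
    """Pick the class maximising log P(class) + sum of log P(word | class),
    with add-k (Laplace) smoothing so unseen words never zero out a class."""
    vocab = {w for wc in word_counts.values() for w in wc}
    total_docs = sum(doc_counts.values())
    best_class, best_score = None, float("-inf")
    for c, wc in word_counts.items():
        total_words = sum(wc.values())
        score = math.log(doc_counts[c] / total_docs)  # class prior
        for w in text.lower().split():
            # wc[w] is 0 for unseen words; smoothing keeps the probability > 0
            score += math.log((wc[w] + k) / (total_words + k * len(vocab)))
        if score > best_score:
            best_class, best_score = c, score
    return best_class

training = {
    "movie": ["a perfect world", "my perfect woman", "pretty woman"],
    "song":  ["a perfect day", "electric storm", "another rainy day"],
}
wc, dc = train(training)
print(classify("perfect storm", wc, dc, k=1.0))  # → "song"
```

With k=1, "perfect storm" scores higher under "song" (both words appear once there) than under "movie" (where "storm" is unseen and only smoothing saves it), which is exactly the kind of calculation the linked tool walks you through equation by equation.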
Cracked (humour website) on logical fallacies and cognitive biases.
Subheadings:
We’re Not Programmed to Seek “Truth,” We’re Programmed to “Win”
Our Brains Don’t Understand Probability
We Think Everyone’s Out to Get Us Are your enemies innately Evil?
We’re Hard-Wired to Have a Double Standard (Fundamental attribution error)
Facts Don’t Change Our Minds (We change our minds less often than we think)
Most of these should be familiar, but it’s a good example of presenting these ideas in a readable style. It might be a useful resource to point people to who would be put off by the style here.
This actually got its own post a few days ago.
The Guardian (prominent UK newspaper) on friendly (or otherwise) artificial general intelligence.
Interesting because it’s a ‘popular culture’ look at ideas about AI we might consider fairly basic. It might be a bit sensationalist; the tagline is “AI scientists want to make gods. Should that worry us? - Singularitarians believe artificial intelligence will be humanity’s saviour. But they also assume AI entities will be benevolent”
What a disheartening article. The whole thing can be summed up with a quote from Three Major Singularity Schools:
Reading this article and the comments section really drove home how important rationality skills are when thinking about the future.
Agreed. I hope you (and other LW people) contribute to the discussion to try and correct some of these misconceptions.
It is an important reminder of how strange and scary these ideas seem at first glance and the inferential distances involved.
How would you estimate the percentage of LWers in the Singularitarian movement? Maybe most Singularitarians really are that clueless.
If you google Singularitarian, the obsolete Singularitarian Principles document on yudkowsky.net is the second result. It would be good if the obsolete notice steered the reader to more current sources, including Less Wrong.
A nice A.I.-themed movie for those who’ve never seen it: http://www.youtube.com/watch?v=vn0cz7vYOcc — all 10 parts are on YouTube.