[LINK] Terrorists target AI researchers
Something a number of LWers should probably be cautious of.
http://www.nature.com/news/2011/110822/full/476373a.html
DO NOT USE YOUR REGULAR IDENTITY TO SAY ANYTHING TRULY INTERESTING ON THIS THREAD, OR ON THIS TOPIC, UNLESS YOU HAVE THOUGHT ABOUT IT FOR FIVE MINUTES.
You’re paranoid. We’re only speculating on the motives, identity, and whereabouts of a serial killer, in a public forum. What could possibly go wrong?
In general, you would be advised not to say anything on the Internet unless you have thought about it for at least five minutes.
Why not? You just did. I’m going to post here with my name even if it does draw negative attention from a fringe group of terrorists.
Why not? (This is a serious question. I don’t know why not.)
There are two primary issues.
First, regular identities can be linked to actual people. If someone talks in this specific context about how they support AI and nanotech research, it could draw the attention of the group in question.
Second, people in this thread may be tempted to discuss whether there is any actual legitimacy to the viewpoints in question. In general, Less Wrong commenters are probably more oblivious than most people to how frank discussions can lead to bad results, even when the issues are discussed in a highly hypothetical fashion. For example, having the SIAI associated with even marginal, theoretical support of terrorist activity in this age could lead to bad results.
One Quirrell point to JoshuaZ for getting both of the reasons, rather than stopping after just one like jimrandomh did.
(I’m going to stop PGP signing these things, because when I did that before, it was a pain working around Markdown, and it ended up having to be in code-format mode, monospaced and not line broken correctly, which was very intrusive. A signed list of all points issued to date will be provided on request, but I will only bother if a request is actually made.)
Heh. If a poster of one of these comments later disappears from LW for any amount of time, this might well become a local meme akin to the Bas-
I remarked elsewhere that, if someone media-savvy could use this to show the USA’s voters that the terrorists hate our Science as well as our freedoms, we might get all manner of space telescopes and stem cell therapies funded.
I like it, but I couldn’t really say that the belief that terrorists hate our freedom led to a great increase in freedom.
Not just terrorists: Mexican, illegal-immigrant terrorists.
If we hadn’t already been warned by Quirrell, I might start offering advice to anyone who cares about US scientific funding...
A while back, I claimed the Less Wrong username Quirinus Quirrell, and started hosting a long-running, approximate simulation of him in my brain. I have mostly used the account trivially—to play around with crypto-novelties, say mildly offensive things I wouldn’t otherwise, and poke fun at Clippy. Several times I have doubted the wisdom of hosting such a simulation. Quirrell’s values are not my own, and the plans that he generates (which I have never followed) are mostly bad when viewed in terms of my values. However, I have chosen to keep this occasional alter-identity, because he sees things that would otherwise be invisible to me.
Tor and a virtual machine sandbox are strongly recommended for following all links in this comment. Malware is highly probable and intelligence agencies take notice.
All of the primary source documents from this group are in Spanish. The blog “War on Society” has a translation of one of ITS’s manifestos here, plus links to an earlier manifesto, a photo of one of the assembled package bombs, and the original publication in Spanish on the blog Liberacion Total here. Liberacion Total has been accused of being affiliated with ITS for publishing the manifesto, but they put up a notice saying they merely received it by mail. A few interesting observations:
The basic thesis of ITS’s writing is “technology is bad”. It shuffles between different types of technology and different complaints, bringing up gray goo, artificial intelligence, animal testing, and environmental contamination.
It is focused almost exclusively on Mexico and Mexican universities.
The original documents replace o/a with x in many words. I saw this in “lxs cientificxs” (“the scientists”) several times, and thought it was meant to be threatening; but on further inspection, I think this is more like those novelty gender-neutral pronouns (“ey”) you sometimes see in English. If you want to use automated translation, you will have to undo this first (a rough preprocessing sketch appears a few comments down).
The blogs War on Society, Liberacion Total and culmine appear to be sympathetic.
There are names of specific people and organizations in those documents. Those people should take notice (and probably already have).
SingInst gets one mention on this page, in the middle of some ranting about Facebook being a mind-control tool.
Which Google translates to:
There are some clues in there that could be useful for figuring out who this is. I’m not sure how uncommon the ‘x’ thing is, but it’s probably in his real-name writings too, and it’s easy to search for. His rantings about Facebook indicate he probably had an account at one point but abandoned it. On priors, he’s almost certainly a loner, and the same rant seems to back that up. His understanding of technology seems pretty shallow, which means the manifestos might’ve been sent through insufficiently-anonymized means (though Liberacion Total probably isn’t keen on helping unmask him).
Common enough, it seems. “Libertad por lxs pressxs politicxs” (“freedom for political prisoners”) is a thing (even a Facebook group), and from what I gather, a common graffiti slogan.
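For anyone who actually wants to run the Spanish originals through automated translation, here is a minimal sketch of the kind of preprocessing that would undo the x-substitution. The regex and the choice to restore the masculine “o” ending are assumptions for illustration, not anything taken from the source documents.

```python
import re

# Hedged sketch: one way to undo the o/a -> x substitution ("lxs cientificxs")
# before feeding the text to a machine translator. Restoring everything to the
# masculine "o" form is an arbitrary choice; either gendered form should
# translate acceptably.
def normalize_x_endings(text: str) -> str:
    # Match an "x" standing in for a vowel at the end of a word or just
    # before a plural "s" (e.g. "lxs" -> "los", "cientificx" -> "cientifico").
    return re.sub(r"(?<=\w)x(?=s\b|\b)", "o", text)

print(normalize_x_endings("lxs cientificxs"))  # -> los cientificos
```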
More than that, he specifically targeted CS researchers like Gelernter.
hmm, is there a collection of the history of terrorist attacks related to AI?
(Since the linked article doesn’t, at first glance, talk about AI researchers, the title should be justified.)
Thanks.
On the other hand, the mission of the SIAI is founded on the belief that if anyone succeeds at AGI without solving the Friendliness problem, they will destroy the world. Eliezer has said in an interview a year or two back that he does not think that anyone currently working on AGI has any chance of succeeding. But if not now, then some day the question will have to be faced:
What do you do if you really believe that someone’s research has a substantial chance of destroying the world?
Go batshit crazy.
If you really believe it, have compensated for biases by all means available, and are a good consequentialist… fat man… five workers…
I hear SIAI was looking for people with martial-arts skills, lol.
Somebody mentioned Aleister Crowley’s quotes on LW a little while ago; so:
-- Magical Diaries of Aleister Crowley: Tunisia 1923 (1996), edited by Stephen Skinner, p. 21
If one is skeptical of the existence of Thelema or of the validity of these spiritual experiences, then this sounds a lot like religious leaders who say “Sure, believe in Heaven. But don’t commit suicide to get there faster. Or commit homicide to get other people there faster. Or do anything else that contradicts ordinary decency.”
Part of the fun of being right is that when your system contradicts ordinary decency, you get to at least consider siding with your system.
(although hopefully if your system is right you will choose not to, for the right reasons.)
My Crowley background is pretty spotty, but I read that as him generalizing over ethical intersections with religious experience and then specializing to his own faith. It’s not entirely unlike some posts I’ve read here, in fact; the implication seems to be that if some consequence of your religious (i.e. axiomatic; we could substitute decision-theoretic or similarly fundamental) ethics seems to suggest gross violations of common ethics, then it’s more likely that you’ve got the wrong axioms or forgot to carry the one somewhere than that you need to run out and (e.g.) destroy all humans. Which is very much what I’d expect from a rationalist analysis of the topic.
Extraordinary situations call for extraordinary decency
Here is an intuition pump: you see a baby who got hold of his dad’s suitcase nuke and is about to destroy the city. Do you prevent him from pushing the button, even by lethal means? If the answer is yes, then consider Richard’s original question, and check whether the differences between the two situations are enough to reverse your decision.
On the one hand, yes; on the other hand, I do think I take the risks from UFAI seriously, and have some relevant experience and skill, but still wouldn’t participate in a paramilitary operation against an AGI researcher.
edit: On reflection, this is due to my confidence in my ability to correctly predict the end of the world, and the problem of multiplying low probabilities by large utilities.
You mean lack of confidence, right?
There is a problem that can occur when you are attempting to check all of your biases while contemplating a serious crime.
The risk is that, while checking your biases, you expose yourself to people who would then be able to help law enforcement turn you in for that serious crime. And you would presumably be aware that you can hardly afford to be caught, because then there would be things you never got to blow up as part of your plan to save the world, all because you weren’t secretive enough.
This means that by checking all of your biases you are boosting the chance of the world being destroyed if it turns out you weren’t biased. And it’s easy to convince yourself that you can’t risk that, so you can’t talk to other people about your plans.
But you can’t thoroughly check your biases by consulting yourself and no one else. It is entirely possible for you to be heavily deluding yourself, having gotten brain damage or gone insane.
So you’re left with the conflicting demands of “I need to talk with other people to verify this is accurate.” and “I need to keep this a secret, so I can implement it if it is accurate.”
As a side question, does anyone else feel like this has a few points that are oddly similar to Pascal’s mugging?
As an example, they both seem to have that aspect of “But you simply MUST do this, because the consequences are simply too great not to do it, even after accounting for the probabilities.”
A Catholic priest couldn’t turn you in, and a smart one probably knows a lot about some kinds of human biases.
That’s not true about the confidentiality of priests… a priest has the same legal obligation as a therapist to turn in someone who is a danger to themselves or others.
Doubt it. The Code of Canon Law states:
If you are convinced that, barring any biases, your calculated course of action is the right one, you could talk to anyone you trusted to be similarly convinced by your arguments. Either they will point out your errors and convince you that you shouldn’t act, or they will not discover any errors and agree to help you with your plans.
Screaming and bleeding and gnashing of teeth; little AI researchers can’t fall asleep ; )