Because people on LW are weird. Instead of discussing natural and sane topics, such as cute kittens, iPhone prices, politics, horoscopes, celebrities, sex, et cetera, they talk about crazy stuff like thinking machines and microscopic particles. Someone should do them a favor, turn off their computers, and buy them a few beers, so that normal people can stop being afraid of them.
No, that isn’t it. LW isn’t at all special in that respect—a huge number of specialized communities exist on the net which talk about “crazy stuff”, but no one suspects them of being phygs. Your self-deprecating description is a sort of applause lights for LW that’s not really warranted.
Because LW is trying to change the way people think, and that is scary. Things like that are OK only when the school system is doing it, because the school system is accepted by the majority. Books are usually also accepted, but only if you borrow them from a public library.
No, that isn’t it. Every self-help book (of which there’s a huge industry, and most of which are complete crap) is “trying to change the way people think”, and nobody sees that as weird. The Khan academy is challenging the school system, and nobody thinks they’re phyggish. Attempts to change the way people think are utterly commonplace, both small-scale and large-scale. And the part about books and public libraries is just weird (what?).
Because people on LW pretend they know some things better than everyone else, and that’s an open challenge that someone should go and kick their butts, preferably literally.
Unwarranted applause lights again. Everybody pretends they know some things better than everyone else. Certainly any community that rallies around experts on some particular topic does. With “preferably literally” you cross over into whining-victimhood territory.
What’s worse, people on LW have the courage to disagree even with some popular people, and that’s pretty much insane.
The self-pandering here is particularly strong, almost middle-school grade stuff.
You’ve done a very poor job trying to explain why LW is accused of being phyggish.
There are no known examples of families broken when a family member refuses to submit to eternal knowledge of the Scriptures. [...] There are no known examples of violence or blackmail towards a former LW participant who decided to stop reading LW. [...] Minus the typical internet procrastination, there are no known examples of people who have lost years of their time and thousands of dollars, ruined their social and professional lives in their blind following of the empty promises LW gave them.
This, on the other hand, is a great, very strong point that everyone who finds themselves wary of (perceived or actual) phyggishness on LW should remind themselves of. I’m thinking of myself in particular, and thank you for this strong reminder, so forcefully phrased. I have to be doing something wrong, since I frequently ponder about this or that comment on LW that seems to exemplify phyggish thinking to me, but I never counter to myself with something like what I just quoted.
It’s not the Googleability of “phyg”. One recent real-life example is a programmer who emailed me deeply concerned (because I wrote large chunks of the RW article on LW). They were seriously worried about LessWrong’s potential for decompartmentalising really bad ideas, given the strong local support for complete decompartmentalisation, by this detailed exploration of how to destroy semiconductor manufacture to head off the uFAI. I had to reassure them that Gwern really is not a crazy person and had no intention of sabotaging Intel worldwide, but was just exploring the consequences of local ideas. (I’m not sure this succeeded in reassuring them.)

But, y’know, if you don’t want people to worry you might go crazy-nerd dangerous, then not writing up plans for ideology-motivated terrorist assaults on the semiconductor industry strikes me as a good start.
Edit: Technically just sabotage, not “terrorism” per se. Not that that would assuage qualms non-negligibly.
On your last point, I have to cite our all-*cough*-wise Professor Quirrell:

“Such dangers,” said Professor Quirrell coldly, “are to be discussed in offices like this one, not in speeches. The fools […] are not interested in complications and caution. Present them with anything more nuanced than a rousing cheer, and you will face your war alone.”
Nevermind that there were no actual plans for destroying fabs, and that the whole “terrorist plot” seems to be a collective hallucination.

Nevermind that the author in question has exhaustively argued that terrorism is ineffective.

Yeah, but he didn’t do it right there in that essay. And saying “AI is dangerous, stopping Moore’s Law might help, here’s how fragile semiconductor manufacture is, just saying” still read to some people (including several commenters on the post itself) as bloody obviously implying terrorism.
You’re pointing out it doesn’t technically say that, but multiple people coming to that essay have taken it that way. You can say “ha! They’re wrong”, but I nevertheless submit that if PR is a consideration, the damage done by the essay strikes me as unlikely to be outweighed by using rot13 for SEO.
Yes, I accept that it’s a problem that everyone and their mother leapt to the false conclusion that he was advocating terrorism. I’m not saying anything like “Ha! They’re wrong!” I’m lamenting the lamentable state of affairs that led so many people to jump to a false conclusion.
“Just saying” is really not a disclaimer at all. Cf. publishing lists of abortion doctors and saying you didn’t intend lunatics to kill them—if you say “we were just saying”, the courts say “no you really weren’t.”
We don’t have a demonstrated lunatic hazard on LW (though we have had unstable people severely traumatised by discussions and their implications, e.g. Roko’s Forbidden Thread), but “just saying” in this manner still brings past dangerous behaviour along these lines to mind; and, given that decompartmentalising toxic waste is a known nerd hazard, this may not even be an unreasonable worry.
As far as I can tell, “just saying” is a phrase you introduced to this conversation, and not one that appears anywhere in the original post or its comments. I don’t recall saying anything about disclaimers, either. So what are you really trying to say here?
It’s a name for the style of argument: that it’s not advocating people do these things, it’s just saying that uFAI is a problem, slowing Moore’s Law might help, and by the way here are the vulnerabilities of Intel’s setup. Reasonable people assume that 2 and 2 can in fact be added to make 4, even if 4 is not mentioned in the original. This is a really simple and obvious point.
Note that I am not intending to claim that the implication was Gwern’s original intention (as I note way up there, I don’t think it is); I’m saying it’s a property of the text as rendered. And that me saying it’s a property of the text is supported by multiple people adding 2 and 2 for this result, even if arguably they’re adding 2 and 2 and getting 666.
It’s completely orthogonal to the point that I’m making.
If somebody reads something and comes to a strange conclusion, there’s got to be some sort of five-second level trigger that stops them and says, “Wait, is this really what they’re saying?” The responses to the essay made it evident that there’s a lot of people that failed to have that reaction in that case.
That point is completely independent from any aesthetic/ethical judgments regarding the essay itself. If you want to debate that, I suggest talking to the author, and not me.
I’d have wondered about it myself if I hadn’t had prior evidence that Gwern wasn’t a crazy person, so I’m not convinced that it’s as obviously surface-innocuous as you feel it is. Perhaps I’ve been biased by hearing crazy-nerd stories (and actually going looking for them, ’cos I find them interesting). And I do think the PR disaster potential was something I would class as obvious, even if terrorist threats from web forum postings are statistically bogeyman stories.
I suspect we’ve reached the talking past each other stage.
I understood “just saying” as a reference to the argument you imply here. That is, you are treating the object-level rejection of terrorism as definitive and rejecting the audience’s inference of endorsement of terrorism as a simple error, and DG is observing that treating the object-level rejection as definitive isn’t something you can take for granted.
Meaning does not excuse impact, and on some level you appear to still be making excuses. If you’re going to reason about impressions (I’m not saying that you should, it’s very easy to go too far in worrying about sounding respectable), you should probably fully compartmentalize (ha!) whether a conclusion a normal person might reach is false.
I’m not making excuses. Talking about one aspect of a problem does not imply that other aspects of the problem are not important. But honestly, that debate is stale and appears to have had little impact on the author. So what’s the point in rehashing all of that?
I agree that it’s not fair to blame LW posters for the problem. However, I can’t think of any route to patching the problem that doesn’t involve either blaming LW posters, or doing nontrivial mind alterations on a majority of the general population.

Anyway, we shouldn’t make it too easy for people to reach the false conclusion, and we should err on the side of caution. Having said this, I join your lamentations.
Nevermind the fact that LW actually believes that uFAI has infinitely negative utility and that FAI has infinitely positive utility (see arguments for why SIAI is the optimal charity). That people conclude that acts that most people would consider immoral are justified by this reasoning, well I don’t know where they got that from. Certainly not these pages.
Ordinarily, I would count on people’s unwillingness to act on any belief they hold that is too far outside the social norm. But that kind of thinking is irrational, and irrational restraint has a bad rep here (“shut up and calculate!”)
LW scares me. It’s straightforward to take the reasoning of LW and conclude that terrorism and murder are justified.
Is there any ideology or sect of which that could not be said? Let us recall the bloody Taoist and Buddhist rebellions or wars in East Asian history and endorsements of wars of conquest, if we shy away from Western examples.
Oh sure, there are plenty of other religions as dangerous as the SIAI. It’s just strange to see one growing here among highly intelligent people who spend a ton of time discussing the flaws in human reasoning that lead to exactly this kind of behavior.
However, there are ideologies that don’t contain shards of infinite utility, or that contain a precautionary principle that guards against shards of infinite utility that crop up. They’ll say things like “don’t trust your reasoning if it leads you to do awful things” (again, compare that to “shut up and calculate”). For example, political conservatism is based on a strong precautionary principle. It was developed in response to the horrors wrought by the French Revolution.
One of the big black marks on the SIAI/LW is the seldom discussed justification for murder and terrorism that is a straightforward result of extrapolating the locally accepted morality.
However, there are ideologies that don’t contain shards of infinite utility, or that contain a precautionary principle that guards against shards of infinite utility that crop up. They’ll say things like “don’t trust your reasoning if it leads you to do awful things” (again, compare that to “shut up and calculate”). For example, political conservatism is based on a strong precautionary principle. It was developed in response to the horrors wrought by the French Revolution.
I don’t know how you could read LW and not realize that we certainly do accept precautionary principles (“running on corrupted hardware” has its own wiki entry), that we are deeply skeptical of very large quantities or infinities (witness not one but two posts on the perennial problem of Pascal’s mugging in the last week, neither of which says ‘you should just bite the bullet’!), and that libertarianism is heavily overrepresented compared to the general population.
One of the big black marks on the SIAI/LW is the seldom discussed justification for murder and terrorism that is a straightforward result of extrapolating the locally accepted morality.
No, one of the ‘big black marks’ on any form of consequentialism or utilitarianism (as has been pointed out ad nauseam over the centuries) is that. There’s nothing particular to SIAI/LW there.
It’s true that lots of utilitarianisms have corner cases where they support actions that would normally be considered awful. But most of them involve highly hypothetical scenarios that seldom happen, such as convicting an innocent man to please a mob.
The problem with LW/SIAI is that the moral monstrosities they support are much more actionable. Today, there are dozens of companies working on AI research. LW/SIAI believes that their work will be of infinite negative utility if they are successful before Eliezer invents FAI theory and convinces them that he’s not a crackpot. The fate of not just human civilization, but all of galactic civilization is at stake.
So, if any of them looks likely to be successful, such as scheduling a press conference to announce a breakthrough, then it’s straightforward to see what SI/LW thinks you should do about that. Actually, given the utilities involved, a more proactive strategy may be justified, if you know what I mean.
I’m pretty sure this is going to evolve into an evil terrorist organization, and would have done so already if the population weren’t so nerdy and pacifistic to begin with.
And yes, there are the occasional bits of cautionary principles on LW. But they are contradicted and overwhelmed by “shut up and calculate”, which says trust your arithmetic utilitarian calculus and not your ugh fields.
if any of them looks likely to be successful, such as scheduling a press conference to announce a breakthrough, then it’s straightforward to see what SI/LW thinks you should do about that. Actually, given the utilities involved, a more preventative strategy may be justified as well. [..] I’m pretty sure this is going to evolve into an evil terrorist organization
I agree that it follows from (L1) the assumption of (effectively) infinite disutility from UFAI, that (L2) if we can prevent a not-guaranteed-to-be-friendly AGI from being built, we ought to. I agree that it follows from L2 that if (L3) our evolving into an evil terrorist organization minimizes the likelihood that not-guaranteed-to-be-friendly AGI is built, then (L4) we should evolve into an evil terrorist organization.
The question is whether we believe L3, and whether we ought to believe L3. Many of us don’t seem to believe this. Do you believe it? If so, why?
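For what it’s worth, the structure of that argument can be written out as a minimal propositional sketch, using the comment’s own labels L1–L4 (the minus-infinity shorthand is a paraphrase of “effectively infinite disutility”, not anyone’s formal claim):

% A sketch of the argument above, using the comment's own labels L1--L4.
% The -infinity shorthand paraphrases "effectively infinite disutility".
\begin{align*}
\text{(L1)} \quad & U(\text{UFAI}) \approx -\infty \\
\text{(L2)} \quad & \text{(L1)} \;\Rightarrow\; \text{we ought to prevent any not-guaranteed-friendly AGI from being built} \\
\text{(L3)} \quad & \text{evolving into an evil terrorist organization minimizes } \Pr(\text{UFAI is built}) \\
\text{(L4)} \quad & \text{(L2)} \wedge \text{(L3)} \;\Rightarrow\; \text{we should evolve into an evil terrorist organization}
\end{align*}

Laid out this way, the only empirically disputable step is (L3), which is exactly the premise the replies below attack.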
I don’t expect terrorism is an effective way to get utilitarian goals accomplished. Terrorist groups not only don’t tend to accomplish their goals; but also, in those cases where a terrorist group’s stated goal is achieved or becomes obsolete, they don’t dissolve and say “our work is done” — they change goals to stay in the terrorism business, because being part of a terrorist group is a strong social bond. IOW, terrorist groups exist not in order to effectively accomplish goals, but rather to accomplish their members’ psychological needs.
“although terrorist groups are more likely to succeed in coercing target countries into making territorial concessions than ideological concessions, groups that primarily attack civilian targets do not achieve their policy objectives, regardless of their nature.” — Max Abrahms, “Why Terrorism Does Not Work”
“The actual record of terrorist behavior does not conform to the strategic model’s premise that terrorists are rational actors primarily motivated to achieving political ends. The preponderance of empirical and theoretical evidence is that terrorists are rational people who use terrorism primarily to develop strong affective ties with fellow terrorists.” — Max Abrahms, “What Terrorists Really Want: Terrorist Motives and Counterterrorism Strategy”.
Moreover, terrorism is likely to be distinctly ineffective at preventing AI advances or uFAI launch, because these are easily done in secret. Anti-uFAI terrorism should be expected to be strictly less successful than, say, anti-animal-research or other anti-science terrorism: it won’t do anything but impose security costs on scientists, which in the case of AI can be accomplished much easier than in the case of biology or medicine because AI research can be done anywhere. (Oh, and create a PR problem for nonterrorists with similar policy goals.)
As such, L3 is false: terrorism predictably wouldn’t work.
Yeah. When I run into people like Jacob (or XiXi), all I can do is sigh and give up. Terrorism seems like a great idea… if you are an idiot who can’t spend a few hours reading up on the topic, or even just read the freaking essays I have spent scores of hours researching & writing on this very question discussing the empirical evidence.

Apparently they are just convinced that utilitarians must be stupid or ignorant. Well! I guess that settles everything.
There’s a pattern that shows up in some ethics discussions where it is argued that an action you could actually go out and start doing (so no 3^^^3 dust specks or pushing fat people in front of runaway trains), and that diverges from everyday social conventions, is a good idea. I get the sense from some people that they feel obliged to either dismiss the idea by any means, or start doing the inconvenient but convincingly argued thing right away. And they seem to consider dismissing the idea with bad argumentation a lesser sin than conceding a point or suspending judgment and then continuing to not practice whatever the argument suggested. This shows up often in discussions of vegetarianism.
I got the idea that XiXiDu was going crazy because he didn’t see any options beyond dedicating his life to door-to-door singularity advocacy or finding the fatal flaw which proved once and for all that SI are a bunch of deluded charlatans, and he didn’t want to do the former just because a philosophical argument told him to and couldn’t quite manage the latter.
If this is an actual thing, people with this behavior pattern would probably freak out if presented with an argument for terrorism they weren’t able to dismiss as obviously flawed extremely quickly.
XiXi was around for a while before he began ‘freaking out’.

I think what Risto meant was “an argument for terrorism they weren’t able to (dismiss as obviously flawed extremely quickly)”, not “people with this behavior pattern would probably freak out (..) extremely quickly”.
How long it takes for the hypothetical behavior pattern to manifest is, I think, beside their point.
(nods) I do have some sympathy for how easy it is to go from “I endorse X based on Y, and you don’t believe Y” to “You reject X.” But yeah, when someone simply refuses to believe that I also endorse X despite rejecting Y, there’s not much else to say.
Yup, I agree with all of this. I’m curious about jacoblyles’ beliefs on the matter, though. More specifically, I’m trying to figure out whether they believe L3 is true, or believe that LW/SI believes L3 is true whether it is or not, or something else.
I’m pretty sure this is going to evolve into an evil terrorist organization, and would have done so already if the population weren’t so nerdy and pacifistic to begin with.
‘Pretty sure’, eh? Would you care to take a bet on this?
I’d be happy to go with a few sorts of bets, ranging from “an organization that used to be SIAI or CFAR is put on the ‘Individuals and Entities Designated by the State Department Under E.O. 13224’ or ‘US Department of State Terrorist Designation Lists’ within 30 years” to “>=2 people previously employed by SIAI or CFAR will be charged with conspiracy, premeditated murder, or attempted murder within 30 years” etc. I’d be happy to risk, on my part, amounts up to ~$1000, depending on what odds you give.
If you’re worried about counterparty risk, we can probably do this on LongBets (although since they require the money upfront I’d have to reduce my bet substantially).
Thanks for the comments. What I wrote was exaggerated, written under strong emotions, when I realized that the whole phyg discussion does not make sense, because there is no real harm, only some people made nervous by some pattern matching. So I tried to list the patterns which match… and then those which don’t.
My assumption is that there are three factors which together make the bad impression; separately they are less harmful. Being only “weird” is pretty normal. Being “weird + thorough”, for example memorizing all Star Trek episodes, is more disturbing, but it only seems to harm the given individual. The majority will make fun of such individuals; they are seen as at the bottom of the pecking order, and they kind of accept it.
The third factor is when someone refuses to accept the position at the bottom. It is the difference between saying “yeah, we read sci-fi about parallel universes, and we know it’s not real, ha-ha silly us” and saying “actually, our interpretation of quantum physics is right, and you are wrong, that’s the fact, no excuses”. This is the part that makes people angry. You are allowed to take the position of authority only if you are a socially accepted authority. (A university professor is allowed to speak about quantum physics in this manner, a CEO is allowed to speak about money this way, a football champion is allowed to speak about football this way, etc.) This is breaking a social rule, and it has consequences.
Every self-help book (of which there’s a huge industry, and most of which are complete crap) is “trying to change the way people think”, and nobody sees that as weird.
A self-help book is safe. A self-help organization, not so much. (I mean an organization of people trying to change themselves, such as Alcoholics Anonymous, not a self-help publishing/selling company.)
The Khan academy is challenging the school system, and nobody thinks they’re phyggish.
They are supplementing the school system, not criticizing it. The schools can safely ignore them. Khan Academy is admired by some people, but generally it remains at the bottom of the pecking order. This would change for example if they started openly criticizing the school system, and telling people to take their children away from schools.
Generally I think that when people talk about phygs, the reason is that their instinct is saying: “inside of your group, a strong subgroup is forming”. A survival reaction is to call the attention of the remaining group members to destroy this subgroup together before it becomes strong enough. You can avoid this reaction if the subgroup signals weakness, or if it signals loyalty to the current group leadership; in both cases, the subgroup does not threaten the existing order.
Assuming this instinct is real, we can’t change it; we can just avoid triggering the reaction. How exactly? One way is to signal harmlessness; but this seems incompatible with our commitment to truth and the spirit of tsuyoku naritai. Another way is to fly below the radar by using obscure technical speech; but this seems incompatible with our goal of raising the sanity waterline (we must be comprehensible to the public). Yet another way is to signal loyalty to the regime, such as the Singularity Institute publishing in peer-reviewed journals. Even this is difficult, because irrationality is very popular, so by attacking irrationality we inevitably attack many popular things. We should choose our battles wisely. But this is the way I would prefer. Perhaps there is yet another way that I forgot.
If the phyg-meme gets really bad we can just rename the site “lessharmful.com”.
Every self-help book (of which there’s a huge industry, and most of which are complete crap) is “trying to change the way people think”, and nobody sees that as weird.

Seriously?
Which part of my comment are you incredulous about?
That nobody sees self-help books as weird or cultlike.
redacted