Never mind the fact that LW actually believes that uFAI has infinitely negative utility and that FAI has infinitely positive utility (see the arguments for why SIAI is the optimal charity). As for people concluding that acts most people would consider immoral are justified by this reasoning, well, I don’t know where they got that from. Certainly not from these pages.
Ordinarily, I would count on people’s unwillingness to act on any belief they hold that is too far outside the social norm. But that kind of thinking is irrational, and irrational restraint has a bad rep here (“shut up and calculate!”).
LW scares me. It’s straightforward to take the reasoning of LW and conclude that terrorism and murder are justified.
Is there any ideology or sect of which that could not be said? If we shy away from Western examples, recall the bloody Taoist and Buddhist rebellions and wars in East Asian history, and the endorsements of wars of conquest.
Oh sure, there are plenty of other religions as dangerous as the SIAI. It’s just strange to see one growing here among highly intelligent people who spend a ton of time discussing the flaws in human reasoning that lead to exactly this kind of behavior.
However, there are ideologies that don’t contain shards of infinite utility, or that contain a precautionary principle guarding against any such shards that crop up. They’ll say things like “don’t trust your reasoning if it leads you to do awful things” (again, compare that to “shut up and calculate”). For example, political conservatism is based on a strong precautionary principle. It was developed in response to the horrors wrought by the French Revolution.
One of the big black marks on SIAI/LW is the seldom-discussed justification for murder and terrorism that is a straightforward result of extrapolating the locally accepted morality.
I don’t know how you could read LW and not realize that we certainly do accept precautionary principles (“running on corrupted hardware” has its own wiki entry), that we are deeply skeptical of very large quantities or infinities (witness not one but two posts on the perennial problem of Pascal’s mugging in the last week, neither of which says ‘you should just bite the bullet’!), and that libertarianism is heavily overrepresented compared to the general population.
No, one of the ‘big black marks’ on any form of consequentialism or utilitarianism (as has been pointed out ad nauseam over the centuries) is that. There’s nothing particular to SIAI/LW there.
It’s true that lots of utilitarianisms have corner cases where they support actions that would normally be considered awful. But most of them involve highly hypothetical scenarios that seldom happen, such as convicting an innocent man to please a mob.
The problem with LW/SIAI is that the moral monstrosities they support are much more actionable. Today, there are dozens of companies working on AI research. LW/SIAI believes that this work will have infinite negative utility if any of those teams succeeds before Eliezer invents FAI theory and convinces them that he’s not a crackpot. The fate of not just human civilization, but all of galactic civilization, is at stake.
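To spell out the arithmetic behind that claim (my own gloss and notation, with ε for the probability that an act prevents uFAI, C for the act’s finite cost, and U(uFAI) for the disutility assigned to uFAI; as far as I know nobody at SI/LW has published such a calculation): once uFAI is assigned effectively unbounded disutility, the naive expected-value case for any preventive act looks like

E[ΔU | act] = ε · |U(uFAI)| − C,

which comes out positive for any ε > 0, however tiny, once |U(uFAI)| is allowed to grow without bound. That is exactly the “shard of infinite utility” pattern the precautionary principles mentioned above are meant to block: in this form, no finite moral cost can ever outweigh the first term.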
So, if any of them looks likely to be successful, such as scheduling a press conference to announce a breakthrough, then it’s straightforward to see what SI/LW thinks you should do about that. Actually, given the utilities involved, a more proactive strategy may be justified, if you know what I mean.
I’m pretty sure this is going to evolve into an evil terrorist organization, and would have done so already if the population weren’t so nerdy and pacifistic to begin with.
And yes, there are occasional precautionary principles on LW. But they are contradicted and overwhelmed by “shut up and calculate”, which says to trust your arithmetic utilitarian calculus and not your ugh fields.
I agree that it follows from (L1) the assumption of (effectively) infinite disutility from UFAI, that (L2) if we can prevent a not-guaranteed-to-be-friendly AGI from being built, we ought to. I agree that it follows from L2 that if (L3) our evolving into an evil terrorist organization minimizes the likelihood that not-guaranteed-to-be-friendly AGI is built, then (L4) we should evolve into an evil terrorist organization.
The question is whether we believe L3, and whether we ought to believe L3. Many of us don’t seem to believe this. Do you believe it? If so, why?
I don’t expect terrorism is an effective way to get utilitarian goals accomplished. Terrorist groups don’t tend to accomplish their goals; moreover, in those cases where a group’s stated goal is achieved or becomes obsolete, they don’t dissolve and say “our work is done”. Instead they change goals to stay in the terrorism business, because being part of a terrorist group is a strong social bond. In other words, terrorist groups exist not to effectively accomplish goals, but to meet their members’ psychological needs.
“although terrorist groups are more likely to succeed in coercing target countries into making territorial concessions than ideological concessions, groups that primarily attack civilian targets do not achieve their policy objectives, regardless of their nature.” — Max Abrahms, “Why Terrorism Does Not Work”
“The actual record of terrorist behavior does not conform to the strategic model’s premise that terrorists are rational actors primarily motivated to achieving political ends. The preponderance of empirical and theoretical evidence is that terrorists are rational people who use terrorism primarily to develop strong affective ties with fellow terrorists.” — Max Abrahms, “What Terrorists Really Want: Terrorist Motives and Counterterrorism Strategy”.
Moreover, terrorism is likely to be distinctly ineffective at preventing AI advances or a uFAI launch, because these are easily done in secret. Anti-uFAI terrorism should be expected to be strictly less successful than, say, anti-animal-research or other anti-science terrorism: it won’t do anything but impose security costs on scientists, which in the case of AI can be absorbed much more easily than in biology or medicine, because AI research can be done anywhere. (Oh, and create a PR problem for nonterrorists with similar policy goals.)
As such, L3 is false: terrorism predictably wouldn’t work.
Yeah. When I run into people like Jacob (or XiXi), all I can do is sigh and give up. Terrorism seems like a great idea… if you are an idiot who can’t spend a few hours reading up on the topic, or even just read the freaking essays discussing the empirical evidence that I have spent scores of hours researching and writing on this very question.
Apparently they are just convinced that utilitarians must be stupid or ignorant. Well! I guess that settles everything.
There’s a pattern that shows up in some ethics discussions where it is argued that some action which diverges from everyday social conventions, and which you could actually go out and start doing (so no 3^^^3 dust specks or pushing fat people in front of runaway trains), is a good idea. I get the sense from some people that they feel obliged either to dismiss the idea by any means, or to start doing the inconvenient but convincingly argued thing right away. And they seem to consider dismissing the idea with bad argumentation a lesser sin than conceding the point, or suspending judgment, and then continuing not to practice whatever the argument suggested. This shows up often in discussions of vegetarianism.
I got the idea that XiXiDu was going crazy because he didn’t see any options beyond dedicating his life to door-to-door singularity advocacy or finding the fatal flaw which proved once and for all that SI are a bunch of deluded charlatans, and he didn’t want to do the former just because a philosophical argument told him to and couldn’t quite manage the latter.
If this is an actual thing, people with this behavior pattern would probably freak out if presented with an argument for terrorism they weren’t able to dismiss as obviously flawed extremely quickly.
XiXi was around for a while before he began ‘freaking out’.
I think what Risto meant was “an argument for terrorism they weren’t able to (dismiss as obviously flawed extremely quickly)”, not “people with this behavior pattern would probably freak out (..) extremely quickly”.
How long it takes for the hypothetical behavior pattern to manifest is, I think, beside their point.
(nods) I do have some sympathy for how easy it is to go from “I endorse X based on Y, and you don’t believe Y” to “You reject X.” But yeah, when someone simply refuses to believe that I also endorse X despite rejecting Y, there’s not much else to say.
Yup, I agree with all of this. I’m curious about jacoblyles’ beliefs on the matter, though. More specifically, I’m trying to figure out whether they believe L3 is true, or believe that LW/SI believes L3 is true whether it is or not, or something else.
‘Pretty sure’, eh? Would you care to take a bet on this?
I’d be happy to go with a few sorts of bets, ranging from “an organization that used to be SIAI or CFAR is put on the ‘Individuals and Entities Designated by the State Department Under E.O. 13224’ or ‘US Department of State Terrorist Designation Lists’ within 30 years” to “>=2 people previously employed by SIAI or CFAR will be charged with conspiracy, premeditated murder, or attempted murder within 30 years” etc. I’d be happy to risk, on my part, amounts up to ~$1000, depending on what odds you give.
If you’re worried about counterparty risk, we can probably do this on LongBets (although since they require the money upfront I’d have to reduce my bet substantially).