You can imagine an argument that goes “Violence against AI labs is justified in spite of the direct harm it does, because it would prevent progress towards AGI.” I have only ever heard people say that someone else’s views imply this argument, and never heard anyone actually advance it sincerely; nevertheless, the hypothetical argument is at least coherent.
Yudkowsky’s position is that the argument above is incorrect because he denies the premise that using violence in this way would actually prevent progress towards AGI. See e.g. here and the following dialogue. (I assume he also believes in the normal reasons why clever one-time exceptions to the taboo against violence are unpersuasive.)
Well, it’s clearly not true that violence would not prevent progress. Either you believe AI labs are making progress towards AGI (in which case, every day they’re not working on it, whether because their servers have been shut down or, more horrifically, because some of their researchers have been incapacitated, is a day that progress is not being made), or you think they’re not making progress anyway, so why are you worried?
I strongly disagree with “clearly not true” because there are indirect effects too. The indirect effects of violence are often far more impactful than the direct effects; compare the direct damage of 9/11 with the resulting wars in Afghanistan and Iraq.