I should make it explicit that the original post didn’t advocate terrorism in any way but was a hypothetical reductio ad absurdum against utilitarianism that was obviously meant for philosophical consideration only.
It was not anything as simple as a philosophical argument against a particular position.
It is a line of reasoning, working from premises that seem to be widely held, that I am unsure how to integrate into my worldview in a way that I (or most people?) would be comfortable with.
I don’t believe that you are being honest in what you write here. If you would really vote against the bombing of Skynet before it tiles the universe with paperclips, then I don’t think you actually believe most of what is written on LW.
Terrorism is just a word used to discredit acts that are deemed bad by those who oppose them.
If I were really sure that Al Qaeda was going to release some superbug bioweapon stored in a school, and there was no other way to stop them from doing so and killing millions, then I would advocate using incendiary bombs on the school to destroy the weapons. I accept the position that killing even one person can’t be a means to an end, not even to save the whole world, but I don’t see how that fits with what is believed in this community. See Torture vs. Dust Specks (the obvious answer is TORTURE, according to Robin Hanson).
I’ll go ahead and reveal my answer now: Robin Hanson was correct, I do think that TORTURE is the obvious option, and I think the main instinct behind SPECKS is scope insensitivity. -- Eliezer Yudkowsky
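For reference, the arithmetic behind that answer can be sketched under simple additive utilitarianism; the disutility values ε and D below are illustrative placeholders, not figures from the original post. Assign each dust speck a tiny disutility ε > 0 and the fifty years of torture a large but finite disutility D. The specks then total

$$N \cdot \varepsilon, \qquad N = 3\uparrow\uparrow\uparrow 3,$$

which exceeds $D$ whenever $N > D/\varepsilon$. Since $3\uparrow\uparrow\uparrow 3$ dwarfs any ratio $D/\varepsilon$ a person could plausibly assign, the aggregate comes out in favor of TORTURE; rejecting that conclusion because $N$ does not feel large is the scope insensitivity the quote refers to.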
Hush, hush! Of course I know it is bad to talk about it in this way. Same with what Roko wrote. The number of things we shouldn’t talk about, even though they are completely rational, seems to be rising. I just don’t have the list of forbidden topics at hand right now.
I don’t think this is a solution. You had better come up with some story for why you people don’t think killing to prevent Skynet is wrong, because the idea of AI going FOOM is quickly going mainstream, and people will draw this conclusion and act on it. Or you stand by what you believe and try to explain why it wouldn’t be terrorism but a far-seeing act to slow down AI research, or at least to watch over it and take out any dangerous research as long as FAI isn’t guaranteed.
A lot of the comments on this post were really confusing until I got to this one.
You missed the point. He said it was bad to talk about, not that he agreed or disagreed with any particular statement.