The difference is that there are no hypothetical fat men near train lines. There are, however, real AI researchers who have received death threats as a result of this kind of thinking.
The difference is that there are no hypothetical fat men near train lines.
What are those thought experiments good for if there are no real-world approximations where they might be useful? What do you expect, absolute certainty? Sometimes consequentialist decisions have to be made under uncertainty, when the scale of the negative utility involved easily outweighs that uncertainty... do you disagree with this?
The problem is, as has been pointed out many times in this thread already, threefold.
Firstly, we do not have perfect information, nor do our brains operate perfectly—the chances of us knowing for sure that there is no way to stop unfriendly AI other than killing someone are so small they can be discounted. The chances of someone believing that to be the case while it is not true are significantly higher.
Secondly, even if it’s just being treated as a (thoroughly unpleasant) thought experiment here, there are people who have received death threats as a result of unstable people reading about uFAI. Were any more death threats to be made as a result of unstable people reading this thread, that would be a very bad thing indeed. Were anyone to actually get killed as a result of unstable people reading this thread, that would not only be a bad thing in itself, but it would likely have very bad consequences for the posters in this thread, for the SIAI, for the FHI and so on. This is my own primary reason for arguing so vehemently here—I do not want to see anyone get killed because I didn’t bother to argue against it.
And thirdly, this is meant to be a site about becoming more rational. Whether or not it was ever the rational thing to do (and I cannot conceive of a real-world situation where it would be), it is never a rational thing to talk about killing members of a named, small group on the public internet because if/when anything bad happens to them, the finger will point at those doing the talking. In pointing this out I am trying to help people act more rationally.
I strongly agree that trying to stop uFAI by killing people is a really bad idea. The problem is that this is not the first time the idea has surfaced, and it won’t be the last. All the rational arguments against it are now buried in a downvoted and deleted thread and under some amount of hypocritical outrage.
...it is never a rational thing to talk about killing members of a named, small group on the public internet because if/when anything bad happens to them, the finger will point at those doing the talking.
The finger might also point at those who scared people about the dangers of AGI research but never made the effort to publicly distance themselves from extreme measures.
Were anyone to actually get killed as a result of unstable people reading this thread...
What if someone gets killed as a result of not reading this thread, because they were never exposed to the arguments for why it would be a really bad idea to violently oppose AGI research?
I trust you’ll do the right thing. I just wanted to point that out.
All the rational arguments against it are now buried in a downvoted and deleted thread
Exactly right. The comment by CarlShulman is valuable enough that it warrants its own thread.
What if someone gets killed as a result of not reading this thread, because they were never exposed to the arguments for why it would be a really bad idea to violently oppose AGI research?
Passionately suppressing the conversation could also convey a message of “Shush. Don’t tell anyone,” as well as showing that you take the idea seriously. This is in stark contrast to signalling that you think the whole idea is just silly, because reasoning like Carl’s is so damn obvious.
I also don’t believe any of the ‘outrage’ in this thread has been ‘hypocritical’ - any more than I believe that those advocating murder have been. Certainly in my own case I have argued against killing anyone, and I have done so consistently—I don’t believe I’ve said anything at all hypocritical here.
“The finger might also point at those who scared people about the dangers of AGI research but never made the effort to publicly distance themselves from extreme measures.”
I absolutely agree. Personally I don’t go around scaring people about AGI research because I don’t find it scary. I also think Eliezer, at least, has done a reasonable job of distancing himself from ‘extreme measures’.
“What if someone gets killed as a result of not reading this thread, because they were never exposed to the arguments for why it would be a really bad idea to violently oppose AGI research?”
Unfortunately, there are very few people in this thread making those arguments, and a large number making (in my view extremely bad) arguments for the other side...