This is a site devoted to rationality, supposedly. How rational is it to make public statements that can be interpreted as saying people one disagrees with deserve to be shot? It’s hyperbole, and, worse, hyperbole that might be both incitement to violence and possibly self-incriminating if one of those people does get shot.
If the world where $randomAIresearcher, who wasn’t anywhere near achieving hir goal anyway, gets shot, the SIAI is shut down as a terrorist organisation, and you get arrested for incitement to violence, seems optimal to you, then by all means keep making statements like the one above...
This is a site devoted to rationality, supposedly. How rational is it to...
Comments of this form are almost always objectionable.
It’s hyperbole, and, worse, hyperbole that might be both incitement to violence and possibly self-incriminating if one of those people does get shot. If the world where $randomAIresearcher, who wasn’t anywhere near achieving hir goal anyway, gets shot, the SIAI is shut down as a terrorist organisation, and you get arrested for incitement to violence, seems optimal to you, then by all means keep making statements like the one above...
Are you trying to be ironic here? You criticize hyperbole while writing that?
No, I am being perfectly serious. There are several people in this thread, yourself included, who are coming very close to advocating—or have already advocated—the murder of scientific researchers. Should any of them get murdered (and as I pointed out in my original comment, which I later redacted in the hope that, as the OP had redacted his post, this would all blow over, Ben Goertzel has reported getting at least two separate death threats from people who have read the SIAI’s arguments, so this is not as low a probability as we might hope), then the finger will point rather heavily at the people in this thread.
Murdering people is wrong, but advocating murder on the public internet is not just wrong but UTTERLY FUCKING STUPID.
advocating murder on the public internet is not just wrong but UTTERLY FUCKING STUPID.
I of course agree with this, but this consideration is unrelated to the question of what constitutes correct reasoning. For example, it shouldn’t move you to actually take the opposite side in the argument and actively advocate it, and creating only the appearance of doing so doesn’t seem to promise a comparable impact.
That is not my only motive. My main motive is that I happen to think that the course of action being advocated would be extremely unwise and not lead to anything like the desired results (and would lead to the undesirable result of more dead people). My secondary motive was, originally, to try to persuade the OP that bringing the subject up at all was an incredibly bad idea, given that people have already been influenced by discussions of this subject to make death threats against an actual person. Trying to stop people making incredibly stupid statements which would incriminate them in the (hopefully) unlikely event of someone actually attempting to kill AI researchers was quite far down the list of reasons.
No, I am being perfectly serious. There are several people in this thread, yourself included, who are coming very close to advocating—or have already advocated—the murder of scientific researchers.
Huh? People here often advocate killing a completely innocent fat guy to save a few more people. People even advocate torturing someone for 50 years so that others don’t get dust specks in their eyes...
The difference is that there are no hypothetical fat men near train lines. There are, however, real, existing AI researchers who have received death threats as a result of this kind of thinking.
The difference is that there are no hypothetical fat men near train lines.
What are those thought experiments good for if there are no real-world approximations where they might be useful? What do you expect, absolute certainty? Sometimes consequentialist decisions have to be made under uncertainty, when the scale of the negative utility involved easily outweighs it... do you disagree with this?
The problem is, as has been pointed out many times in this thread already, threefold.
Firstly, we do not have perfect information, nor do our brains operate perfectly—the chances of us knowing for sure that there is no way to stop unfriendly AI other than killing someone are so small they can be discounted. The chances of someone believing that to be the case when it is not true are significantly higher.
Secondly, even if it’s just being treated as a (thoroughly unpleasant) thought experiment here, there are people who have received death threats as a result of unstable people reading about uFAI. Were any more death threats to be made as a result of unstable people reading this thread, that would be a very bad thing indeed. Were anyone to actually get killed as a result of unstable people reading this thread, that would not only be a bad thing in itself, but it would likely have very bad consequences for the posters in this thread, for the SIAI, for the FHI and so on. This is my own primary reason for arguing so vehemently here—I do not want to see anyone get killed because I didn’t bother to argue against it.
And thirdly, this is meant to be a site about becoming more rational. Whether or not it was ever the rational thing to do (and I cannot conceive of a real-world situation where it would be), it is never a rational thing to talk about killing members of a named, small group on the public internet because if/when anything bad happens to them, the finger will point at those doing the talking. In pointing this out I am trying to help people act more rationally.
I strongly agree that trying to stop uFAI by killing people is a really bad idea. The problem is that this is not the first time the idea has surfaced, and it won’t be the last. All the rational arguments against it are now buried in a downvoted and deleted thread and under some amount of hypocritical outrage.
...it is never a rational thing to talk about killing members of a named, small group on the public internet because if/when anything bad happens to them, the finger will point at those doing the talking.
The finger might also point at those who scared people about the dangers of AGI research but never made the effort to publicly distance themselves from extreme measures.
Were anyone to actually get killed as a result of unstable people reading this thread...
What if anyone gets killed as a result of not reading this thread because he was never exposed to the arguments for why it would be a really bad idea to violently oppose AGI research?
I trust you’ll do the right thing. I just wanted to point that out.
All the rational arguments against it are now buried in a downvoted and deleted thread
Exactly right. The comment by CarlShuman is valuable, to the extent that it warrants its own thread.
What if anyone gets killed as a result of not reading this thread because he was never exposed to the arguments for why it would be a really bad idea to violently oppose AGI research?
Passionately suppressing the conversation could also convey a message of “Shush. Don’t tell anyone.” as well as showing you take the idea seriously. This is in stark contrast to signalling that you think the whole idea is just silly, because reasoning like Carl’s is so damn obvious.
I also don’t believe any of the ‘outrage’ in this thread has been ‘hypocritical’ - any more than I believe that those advocating murder have been. Certainly in my own case I have argued against killing anyone, and I have done so consistently—I don’t believe I’ve said anything at all hypocritical here.
“The finger might also point at those who scared people about the dangers of AGI research but never made the effort to publicly distance themselves from extreme measures.”
I absolutely agree. Personally I don’t go around scaring people about AGI research because I don’t find it scary. I also think Eliezer, at least, has done a reasonable job of distancing himself from ‘extreme measures’.
“What if anyone gets killed as a result of not reading this thread because he was never exposed to the arguments for why it would be a really bad idea to violently oppose AGI research?”
Unfortunately, there are very few people in this thread making those arguments, and a large number making (in my view extremely bad) arguments for the other side...
advocating murder on the public internet is not just wrong but UTTERLY FUCKING STUPID.
This is not a sane representation of what has been said on this thread. I also note that in taking an extreme position against preemptive strikes of any kind, you are pitting yourself against the political strategy of most nations on earth, and definitely against the nation from which most posters originate.
For that matter, I also expect state-sanctioned military or paramilitary organisations to be the groups likely to carry out any violence necessary for the prevention of an AGI apocalypse.
This thread started with a post talking about how we should ‘neutralize’ people who may, possibly, develop AI at some point in the future. You, specifically, replied to “Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever.” with “I approve of that sentiment so long as people don’t actually take it literally when the world is at stake.” Others have been saying “The competent resort to violence as soon as it beats the alternatives.”
What, exactly, would you call that if not advocating murder?
Does not get bullet. Never. Never ever never for ever.
Does it get systematic downvoting of 200 of my historic comments? Evidently—whether done by yourself or another. I’m glad I have enough karma to shrug it off, but I do hope they stop soon. I have made a lot of comments over the last few years.
Edit: As a suggestion, it may be better to scroll back half a dozen pages on the user page before starting a downvote protocol. I was just reading another recent thread I was active in (the social one) and some of the −1s were jarringly out of place. The kind that are never naturally downvoted.
You, specifically, replied to “Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever.” with “I approve of that sentiment so long as people don’t actually take it literally when the world is at stake.” Others have been saying “The competent resort to violence as soon as it beats the alternatives.” What, exactly, would you call that if not advocating murder?
Rejecting what is clearly an irrational quote from Eliezer, independently of the local context. I believe I have rejected it previously and likely will again whenever anyone chooses to quote it. Eliezer should know better than to make general statements that quite clearly do not hold.
Most statements don’t hold in some contexts. Particularly, if you’re advocating an implausible or subtly incorrect claim, it’s easy to find a statement that holds most of the time but not for the claim in question, thus lending it connotational support of the reference class where the statement holds.
Most statements don’t hold in some contexts. Particularly, if you’re advocating an implausible or subtly incorrect claim, it’s easy to find a statement that holds most of the time but not for the claim in question, thus lending it connotational support of the reference class where the statement holds.
I think I agree with what you are saying. As a side note, statements that include “Never. Never ever never for ever” need to do better than to ‘hold in some contexts’. Because that is a lot of ‘never’.
Also, I refuse to reply any more to any of your comments, because at least twice that I have noticed you have edited your comment after the reply has been posted, without posting any acknowledgement of same.
at least twice that I have noticed you have edited your comment after the reply has been posted, without posting any acknowledgement of same.
I do this all the time. There is always room for improvement, and notes about edits are ugly. I only leave them on comments that were later discovered to contain errors that matter for the discussion, and in that case I leave the errors in place, only pointing out their presence.
If you want a better alternative, act on it: push for version history to be implemented for comments.
That’s reasonable. But I personally consider it to be arguing in bad faith if someone makes a comment, I reply to it, then I go back later and see that it’s been edited to look like I’m replying to something substantially different. Minor edits for spelling or punctuation are reasonable, but introducing entirely new strands of argument, or deleting arguments that were there originally, gives an incorrect impression of what’s actually been said. I’m not going to keep going back and checking every five minutes that the context of my comments hasn’t been utterly changed, so I’m only going to reply in more-or-less stable contexts.
Also, I refuse to reply any more to any of your comments
Thank you.
because at least twice that I have noticed you have edited your comment after the reply has been posted,
For about 1 in 3 comments that I make, I think of additional things to say as soon as I press enter. When I start editing within 5 seconds of clicking ‘comment’, I do not consider it necessary to write ‘edit’. Given the frequency, that would be outright spammy.
without posting any acknowledgement of same.
I have added sentences to several comments here. Nothing has been removed. A few extra words have been included where they were missing and their absence made a sentence outright ungrammatical. This is an acknowledgement and not an apology of any kind.
It’s not true that AGI is an argument. Instead, it is a device. That is the simple truth.