If you take this incident to its extreme, the important question is what people will be willing to do in the future based on the argument “it could increase the chance of an AI going wrong...”?
That is not the argument that caused stuff to be deleted from Less Wrong! Nor is it true that leaving it visible would increase the chance of an AI going wrong. The only plausible scenario where information might be deleted on that basis is if someone posted designs or source code for an actual working AI, and in that case much more drastic action would be required.
What was the argument then? This thread suggests my point of view.
Here is one of many comments, from the thread above and elsewhere, indicating that the deletion was due to the risk I mentioned:
I read the article, and it struck me as dangerous. [JoshuaZ 01 August 2010 04:46:39AM]
I’ve just read EY’s comment. It is indeed mainly about protecting people from themselves causing an unfriendly AI to blackmail them. This conclusion is hard to come by, since the comment was deleted without explanation. Still, it’s basically the same argument, and quite a few people on LW seem to follow the argument I described; I described it to start a discussion about how far we want to go.
I noticed there is another deleted comment by EY where he explicitly writes:
“...the gist of it was that he just did something that potentially gives superintelligences an increased motive to do extremely evil things in an attempt to blackmail us.” [Jul 24, 2010 8:31 AM]
Agreed, inasmuch as I suggest Xi should revise to “decrease the chance of an AI going right”.
I stand corrected.