Your original question wasn’t about LW. Before we turn this into a debate about fine-tuning LW’s moral code, shall we consider the big picture? It’s 90 years since the word “robot” was introduced, in a play (Karel Čapek’s R.U.R.) that already featured the possibility of a machine uprising. It’s over 50 years since “artificial intelligence” was introduced as a new academic discipline. We already live in a world where one state can use a computer virus to disrupt the strategic technical infrastructure of another state. The quest for AI, and the idea of popular resistance to AI, have already been out there in the culture for years.
Furthermore, the LW ethos is pro-AI as well as anti-AI. But the average person, if they feel threatened by AIs, will simply be anti-AI. The average person isn’t pining for the singularity, but they do want to live. Imagine something like the movement to stop climate change, but aimed at stopping the singularity instead. Such a movement would undoubtedly appropriate any useful sentiments it found here, but its ethos and organization would be quite different. You should be addressing yourself to this future anti-AI, pro-human movement, and explaining to it why anyone who works on any form of AI should be given any freedom to do so at all.
The principles espoused by the majority on this site can be used to justify some very, very bad actions:
1) The probability that someone will eventually invent AI is high
2) The probability that an AI invented by someone not associated with SIAI will be unfriendly is high
3) The utility of inventing unfriendly AI is negative MAXINT
4) “Shut up and calculate”—trust the math and not your gut if your utility calculations tell you to do something that feels awful.
It’s not hard to figure out that Less Wrong’s moral code supports some very unsavory actions.
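To make the worry concrete, here is a minimal toy sketch of how premises 1–4 combine. Every number in it is hypothetical, chosen only for illustration: once an outcome is assigned an astronomically negative utility, even a one-in-a-million change in its probability swamps any ordinary cost, and premise 4 says to act on that result rather than on your gut.

```python
# Toy expected-utility calculation illustrating premises 1-4 above.
# All numbers are made-up stand-ins, chosen only for illustration.

U_UNFRIENDLY_AI = -1e15  # hypothetical stand-in for "negative MAXINT"
U_STATUS_QUO = 0         # baseline utility if unfriendly AI never appears

def expected_utility(p_unfriendly: float) -> float:
    """Expected utility given a probability that unfriendly AI is created."""
    return p_unfriendly * U_UNFRIENDLY_AI + (1 - p_unfriendly) * U_STATUS_QUO

# An action that shaves one part in a million off the probability of
# unfriendly AI "earns" roughly a billion units of expected utility,
# so under these numbers almost any cost of the action looks "worth it".
gain = expected_utility(0.500000) - expected_utility(0.499999)
print(gain)  # roughly 1e9
```

Nothing in this sketch is anyone’s actual calculation; it only shows why a large enough negative payoff makes the arithmetic dominate ordinary moral hesitation.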