Yeah, that makes sense. Utopia, Scandinavian-style social democracy, and Soviet-style communism all belong to a greater “socialism” superset, just as Friendly AI and the paperclip maximizer both belong to an “artificial intelligence” superset.
And that is also why someone saying “we are ready to build an artificial intelligence tomorrow”, without providing any more details, would make some people here scared. Not because all AIs are bad; not because we don’t want any kind of AI here; not because we know that their AI would be unfriendly. But simply because the fact that they didn’t specify the details is evidence that they didn’t think about the details, and thus they are likely to build an unfriendly AI without actually wanting to. The prior probability of an unfriendly AI is greater than the prior probability of a friendly AI, so if you just blindly hit a point within the “artificial intelligence” space, it is likely to go wrong.
In a similar way, I am concerned that people who want utopia-socialism don’t pay much attention to the details (my evidence is that they don’t find the details worth mentioning), and are probably not aware of (or disagree with) my opinion that it is much easier to create a Soviet-style communism than a stable Friendly socialism. I mean, even if your starting group of revolutionaries all have good intentions, you will probably get infiltrated and removed from power by some power-hungry psychopaths, because… that is what Homo sapiens usually does. You know, mindkilling, corrupted hardware, the conjunction fallacy (all the things that must succeed to build the utopia), and so on. -- And the differing opinions may be caused by some people having first-hand experience of Soviet-style communism (especially the aspect that many well-meaning people created the system and supported its operation, despite the horrible things that happened; partially because the system made it illegal to share information about those horrible things, while encouraging the spread of good news, whether real or imaginary), and other people not having this experience (but hearing some of the good news).