It seems to me that this could snowball into misinterpretations of Moloch bouncing across the blogosphere like a game of Broken Telephone, until there’s a widely-read article about how Less Wrong went so thoroughly wrong about atheism and rationalism that it flipped back around to ancient religions.
This seems pretty unlikely to me. I think the key difference from the uFAI thing is that if you ask a Less Wrong regular “So what is this ‘unFriendly AI’ thing you all talk about? It can’t possibly be as ridiculous as what that article on Slate was saying, can it?”*, then the answer you get will probably sound exactly as silly as the caricature, if not worse, and you’re likely to conclude that LW is some kind of crazy cult or something.
On the other hand, if you ask your LW-regular friend “So what’s the deal with this ‘Moloch’ thing? You guys don’t really believe in baby-eating demons, do you?”, they’ll say something like “What? No, of course not. We just use ‘Moloch’ as a sort of metaphorical shorthand for a certain kind of coordination failure. It comes from this great essay, you should read it...”, which is all perfectly reasonable and will do nothing to perpetuate the original ludicrous rumor. Nobody will say “I know somebody who goes on LessWrong, and he says they really do worship the blasphemous gods of ancient Mesopotamia!”, so the rumor will have much less plausibility and will be easily debunked.
* Overall this is of course an outdated example, since MIRI/FHI/etc. have pulled a spectacular public makeover of Friendly AI in the past year or so.