I’d say that the Moloch thing isn’t that much weirder than our other local eschatological shorthands (“FAI”, “Great Filter”, etc.). That’s just my insider’s perspective, though, so take it with many grains of salt.
I believe you’re right. I’m not familiar with the Great Filter being lambasted outside of Less Wrong, but I, and the people I know personally, have generally discussed the Great Filter less than Friendly A.I. On one hand, the Great Filter seems more associated with Overcoming Bias, so coverage of it is tangential to Less Wrong and has a neutral impact on it. On the other hand, I spend more time on this website, so my impression could just be the availability heuristic. If it is, please share any outside media coverage of the Great Filter you know of.
Anyway, I chose Moloch to stay current, and also because citing a baby-eating demon as the destroyer of the world seems even more eschatological than Friendly A.I., so Moloch strikes me as potentially even more prone to misinterpretation. (un)Friendly A.I. has already been wholly conflated with a scandal about a counterfactual monster that need not be named. It seems to me that misinterpretations of Moloch could snowball across the blogosphere, like a game of Broken Telephone, until there’s a widely-read article about Less Wrong having gotten atheism and rationalism so thoroughly wrong that it flipped back around to ancient religions.
The fact that Less Wrong periodically has to do damage control because there is anything on this website that can be misinterpreted as eschatology seems to demonstrate a persistent image problem. Depressingly, it doesn’t surprise me much that outsiders misinterpret something from this site as dangerous eschatology, perhaps because one would have to read lots of now relatively obscure blog posts to grok it otherwise.
It seems to me that misinterpretations of Moloch could snowball across the blogosphere, like a game of Broken Telephone, until there’s a widely-read article about Less Wrong having gotten atheism and rationalism so thoroughly wrong that it flipped back around to ancient religions.
This seems pretty unlikely to me. I think the key difference from the uFAI thing is that if you ask a Less Wrong regular “So what is this ‘unFriendly AI’ thing you all talk about? It can’t possibly be as ridiculous as what that article on Slate was saying, can it?”*, then the answer you get will probably sound exactly as silly as the caricature, if not worse, and you’re likely to conclude that LW is some kind of crazy cult or something.
On the other hand, if you ask your LW-regular friend “So what’s the deal with this ‘Moloch’ thing? You guys don’t really believe in baby-eating demons, do you?”, they’ll say something like “What? No, of course not. We just use ‘Moloch’ as a sort of metaphorical shorthand for a certain kind of organizational failure. It comes from this great essay, you should read it...”, which is all perfectly reasonable and will do nothing to perpetuate the original ludicrous rumor. Nobody will say “I know somebody who goes on Less Wrong, and he says they really do worship the blasphemous gods of ancient Mesopotamia!”, so the rumor will have much less plausibility and will be easily debunked.
* This is, of course, an outdated example by now, since MIRI/FHI/etc. have pulled off a spectacular public makeover of Friendly AI in the past year or so.