I cannot explain the thoughts of others who have read this and chosen not to comment.
I would not have commented had I not gone through a specific series of ‘not heavily determined’ mental motions.
First, I spent some time in the recent-AI-news rabbit hole, including an interview with Gwern wherein he spoke very beautifully about the importance of writing.
This prompted me to check back in on LessWrong to see what people have been writing about recently. I then noticed your post, which I presumably saw only because I’d disabled a low-karma content filter.
And this prompted me to think “maybe that’s something I could do on LessWrong to dip my toes into the waters more—replying to extremely downvoted posts on the principle that there are likely people arguing in good faith and yet falling flat on LessWrong due to some difference in taste or misapprehension about what kind of community this is.”
Note, I said: people arguing in good faith.
The tone of this post does not seem “good faith.” At least, not at a glance.
Framing this with the phrase “legion of doom” is strange and extremely unhelpful for having a useful conversation about what is actually true in reality.
It calls to mind disclaimers in old General Semantics literature about “emotionally charged language”—stuff that pokes people in the primate instinct parts of their brain.
That would be my guess as to why this tone feels so compelling to you. It trips those wires in my head as well. It’s fun being cheeky and combative—“debate me bro, my faction vs your faction, let’s fight it out.”
That doesn’t help people actually figure out what’s true in reality. It leads to a lot of wasted time running down chains of thought rooted in “I don’t like those guys, I’m gonna destroy their arguments with my impeccable logic and then call them idiots,” which is different from thinking “I’m curious about what is true here; what’s actually true in reality seems important and useful to know, so I should try my best to figure out what that is.”
What you’ve written here has a lot of flaws, but my guess is that this is the main one people here would want you to acknowledge, or at least not repeat.
Don’t use words like “legion of doom”—people who believe these specific things about AI are not actually a cult. People here will not debate you about this stuff if you taunt them by calling them a ‘legion of doom.’
They will correctly recognize that this pattern-matches to an internet culture phenomenon of people saying “debate me bro,” followed by a bunch of really low-effort intellectual work and a frustrating inability to admit mistakes or to drop annoying, uncomfortable tactics.
There may be people out there—I haven’t met them—who believe that they are in a cult (and are happy about that) because they loudly say they agree with Eliezer about AI. They are, I think, wrong about agreeing with Eliezer. If they think this is a cult, they have done an abysmal job understanding the message. They have “failed at reading comprehension.” Those people are probably not on LessWrong. You may find them elsewhere, and you can have an unproductive debate with them on a different platform where unproductive debate is sometimes celebrated.
If you want to debate the object-level content of this article with me, a good start would be acknowledging that the points I’ve made so far are well taken, or giving me an account of your worldview in which the best policy actually is to say things like “I’m surprised the legion of doom is so quiet?”
Mostly I don’t expect this to result in a productive conversation. I’ve skimmed your post, and if I had it on paper I’d have underlined a lot of it in some “this is wrong/weird” color of ink and drawn “?” symbols in the margins. I wrote this on a whim, which is the reason the ‘legion of doom’ is quiet about this post: you aren’t going to catch many people’s whims with this quality of bait.