This seems like an epistemically dangerous way of describing a situation that could instead be described as “these people think that AI x-risk arguments are incorrect, and are willing to argue for that position.”
I don’t think the comment you’re responding to is doing this; I think it’s straightforwardly accusing LeCun and Andreessen of conducting an infowar against AI safety. It also doesn’t claim that they don’t believe their own arguments.
Now, the “deliberate infowar in service of accelerationism” framing seems mostly wrong to me (at least with respect to LeCun; I wouldn’t be surprised if there were a bit of it going on elsewhere), but sometimes that is a thing that happens, and we need to be able to discuss whether it’s happening in any given instance.

Re: your point about tribalism: this does carry risks of various kinds of motivated cognition, but the correct answer is not to cordon off a section of reality and declare it off-limits for discussion.