This seems like an epistemically dangerous way of describing a situation that amounts to “these people think that AI x-risk arguments are incorrect, and are willing to argue for that position”. I have never seen anyone claim that Andreessen and LeCun do not truly believe their arguments. I also legitimately think that x-risk arguments are incorrect; am I conducting an “infowar”? Adopting this viewpoint seems like it would blind you to legitimate arguments from the other side.
That’s not to say you can’t point out errors in argumentation, or point out that LeCun and Andreessen have financial incentives that may be clouding their judgment. But I think this comment crosses the line into counterproductive “us vs. them” tribalism.
This seems like an epistemically dangerous way of describing a situation that amounts to “these people think that AI x-risk arguments are incorrect, and are willing to argue for that position”.
I don’t think the comment you’re responding to is doing this; I think it’s straightforwardly accusing LeCun and Andreessen of conducting an infowar against AI safety. It also doesn’t claim that they don’t believe their own arguments.
Now, the “deliberate infowar in service of accelerationism” framing seems mostly wrong to me (at least with respect to LeCun; I wouldn’t be surprised if there were a bit of that going on elsewhere), but sometimes that is a thing that happens, and we need to be able to discuss whether it’s happening in any given instance. Re: your point about tribalism, this does carry risks of various kinds of motivated cognition, but the correct answer is not to cordon off a section of reality and declare it off-limits for discussion.