It’s important to note that LeCun and Andreessen (a Facebook/Meta board member) are well-established to be currently conducting an infowar against AI safety. They’re committed to accelerating AI and to making sure their company, Facebook/Meta, is the company that gets maximum advantage in that race, at all costs and by any means. One example: releasing SOTA open-source AI models to advance open-source AI and leverage their dominance there, even though open-source releases let companies in China and other countries develop the engineering expertise to compete with American AI companies.
Currently, Facebook seems to be competing against OpenAI, DeepMind, and Anthropic for influence over AI policy in DC. Since Facebook/Meta’s closed-source systems are lagging, their strategy seems to be using AI safety as a sacrificial lamb in order to appeal to the pro-American-innovation norms that have long been dominant in DC. Their edge there is being the company that credibly committed to steering clear of all that confusing AI safety hogwash.
Obviously, there’s much more to it than what I’ve said here, like investor confidence and other complex and controversial factors that I’m not currently willing to discuss in a public comment. There’s seriously a lot going on with Facebook/Meta and AI; you could spend years researching that rabbit hole and never stop finding things worth finding.
But if anyone decides to write a list of detailed explanations of why LeCun and Andreessen’s arguments are obvious horseshit, you should expect them to follow up by throwing substantial time and money into generating more horseshit counterarguments tailored to your arguments, designed specifically to look good to policymakers in DC and other influential people who can’t or won’t go into the details of the problem itself.
The nice thing is that LeCun and Andreessen seem unwilling or unable to lie about superintelligence being feasible at all; they have to admit it’s a real possibility, so they can’t just do the usual thing of appealing to common sense and calling it all a sci-fi grift. AI safety and Facebook/Meta are debating on even ground here: the vast numbers of people who can’t or won’t entertain the idea of vastly-smarter-than-human AI aren’t going to participate on either side.
It really shouldn’t surprise people to see high-level figures from Facebook/Meta, of all places, being well-versed in information warfare; but many people are still approaching this like an honest debate, which it stopped being a long time ago.
This seems like an epistemically dangerous way of describing the situation “these people think that AI x-risk arguments are incorrect, and are willing to argue for that position.” I have never seen anyone claim that Andreessen and LeCun do not truly believe their arguments. I also legitimately think that x-risk arguments are incorrect; am I conducting an “infowar”? Adopting this viewpoint seems like it would blind you to legitimate arguments from the other side.
That’s not to say you can’t point out errors in their arguments, or point out that LeCun and Andreessen have financial incentives that may be clouding their judgment. But I think this comment crosses the line into counterproductive “us vs. them” tribalism.
This seems like an epistemically dangerous way of describing the situation “these people think that AI x-risk arguments are incorrect, and are willing to argue for that position.”
I don’t think the comment you’re responding to is doing this; I think it’s straightforwardly accusing LeCun and Andreessen of conducting an infowar against AI safety. It also doesn’t claim that they don’t believe their own arguments.
Now, the “deliberate infowar in service of accelerationism” framing seems mostly wrong to me (at least with respect to LeCun; I wouldn’t be surprised if there was a bit of that going on elsewhere), but sometimes that is a thing that happens, and we need to be able to discuss whether it’s happening in any given instance. Re: your point about tribalism, this does carry risks of various kinds of motivated cognition, but the correct answer is not to cordon off a section of reality and declare it off-limits for discussion.
How do we know this? If it is “well-established”, then by whom and what is their evidence?
It may be worth writing about some of this in a top-level post.