[Question] Steelman / Ideological Turing Test of Yann LeCun’s AI X-Risk argument?
Yann LeCun has been saying a lot of things about this topic on social media recently, only some of which I've read, and he's written and spoken about it several times in the past. Most of what I've seen from him recently doesn't seem to address the actual arguments. On the other hand, I know he's discussed this in many forums over several years, and the arguments have been spelled out to him so many times by so many people that it's hard for me to believe he really doesn't know what the substantive arguments are. Can someone who's read more of Yann's arguments on this please give their best understanding of what he's actually arguing, in a way that will be understandable to people who are familiar with the standard x-risk arguments?