I think folks are being less charitable to LeCun than they could be. LeCun’s views and arguments about AI risk are strongly counterintuitive to many people in this community who are steeped in alignment theory. His arguments are also more cursory and less fleshed-out than I would ideally like. But he’s a Turing Award winner, for God’s sake. He’s a co-inventor of deep learning.
LeCun has a rough sketch of a roadmap to AGI, which includes a rough sketch of a plan for alignment and safety. Ivan Vendrov writes:
Broadly, it seems that in a world where LeCun’s architecture becomes dominant, useful AI safety work looks more analogous to the kind of work that goes on now to make self-driving cars safe. It’s not difficult to understand the individual components of a self-driving car or to debug them in isolation, but emergent interactions between the components and a diverse range of environments require massive and ongoing investments in testing and redundancy.
For this reason, LeCun thinks of AI safety as an engineering problem analogous to aviation safety or automotive safety. Conversely, disagreeing with LeCun on AI safety would seem to imply a different view of the technical path to developing AGI.