I think part of what may be going on here is that the approach to AI that Yann advocates happens to be one that is unusually amenable to alignment. Some discussion here:
https://www.lesswrong.com/posts/EMZeJ7vpfeF4GrWwm/self-supervised-learning-and-agi-safety