this was posted after your comment, but i think this is close enough:

@ylecun:

> And the idea that intelligent systems will inevitably want to take over, dominate humans, or just destroy humanity through negligence is preposterous. They would have to be specifically designed to do so. Whereas we will obviously design them to not do so.

I'm most convinced by the second sentence:

> They would have to be specifically designed to do so.

Which definitely seems to be dismissing the possibility of alignment failures.
My guess is that he would back off this claim if pushed on it explicitly, but I'm not sure. And it is, at any rate, indicative of his attitude.