Skimming through. May or may not post an in-depth comment later, but for the time being, this stood out to me:
I think it would only be relevant in a fantasy world in which people would be smart enough to design super-intelligent machines, yet ridiculously stupid to the point of giving it moronic objectives with no safeguards.
I note that Yann has not actually specified a way of not “giving [the AI] moronic objectives with no safeguards”. The argument of AI risk advocates is precisely that the thing in quotes in the previous sentence is difficult to do, and that people do not have to be “ridiculously stupid” to fail at it—as evidenced by the fact that no one has actually come up with a concrete way of doing it yet. It doesn’t look to me like Yann addressed this point anywhere; he seems to be under the impression that repeating his assertion more emphatically (obviously, when we actually get around to building the AI, we’ll use our common sense and build it right) somehow constitutes an argument in favor of said assertion. This seems to be an unusually low-quality line of argument from someone who, from what I’ve seen, is normally much more clear-headed than this.
Nor has anyone come up with a way to make AGI. Perhaps Yann's assumption is that how to avoid "giving [the AI] moronic objectives with no safeguards" will become more obvious as more is learned about the nature of AGI. Maybe from Yann's perspective, trying to create safe AGI without knowing how AGI will work is like trying to design a nuclear reactor without knowing how nuclear physics works.
(Not saying I agree with this.)