Bit stronger than that. LeCun is the lead of Meta’s AI research. Beyond the shiny fancy LLMs, Meta’s most important AI product is its Facebook recommender system. And via blind engagement maximisation, that engine has caused and keeps causing a lot of problems.
Being responsible for something isn’t equivalent to having personally done it. If you’re in charge of something, your responsibility also means that it’s your job to know about the consequences of the things you are in charge of; if they cause harm, it’s your job to direct your subordinates to fix that; and if you fail to do that… the buck stops with you. Because it was your job, not to do the thing, but to make sure that the thing didn’t go astray.
Now, I’m actually not sure whether LeCun is professionally (and legally) responsible for the consequences of Meta’s botched AI products, partly because I don’t know how long he’s been in charge of them. But even if he isn’t, they are at least close enough to his position that he ought to know about them, and if he doesn’t, that seriously puts his competence in doubt (since how can the Director of AI research at Meta know less about the failure modes of Meta’s AI than me, a random internet user?). And if he does know, then he knows perfectly well of a glaring counterexample to his “corporations are aligned” theory. No, corporations are not fully aligned; they keep going astray in all sorts of ways as soon as their reward function doesn’t match humanity’s, and the regulators at best play whack-a-mole with them, because none of their mistakes have been existential.
Yet.
That we know of.
So, the argument really comes apart at the seams. If tomorrow a corporation were given the chance to push a button with a 50% chance of multiplying its revenue by ten and a 50% chance of destroying the Earth, it would push it, and if it destroyed the Earth, new corporate law could never come in time to fix that.
I think for a typical Meta employee your argument makes sense; there are lots of employees bearing little to no blame. But once you have “Chief” anything in your title, that gets to be a much harder argument to support, because everything you do in that kind of role helps steer the broader direction of the company. LeCun is in charge of getting Meta to make better AI. Why is Meta making AI at all? Because the company believes it will increase revenue. How? In part by increasing how much users engage with its products, the same products that EY is pointing out have significant mental health consequences for a subset of those users.