Being responsible for something isn’t the same as having personally done it. If you’re in charge of something, your responsibility also means it’s your job to know the consequences of the things you oversee, and if they cause harm, it’s your job to direct your subordinates to fix them, and if you fail to do that… the buck stops with you. Because your job was not to do the thing, but to make sure the thing didn’t go astray.
Now, I’m actually not sure whether LeCun is professionally (and legally) responsible for the consequences of Meta’s botched AI products, partly because I don’t know how long he’s been in charge of them. But even if he isn’t, they are at least close enough to his position that he ought to know about them, and if he doesn’t, that casts serious doubt on his competence (how can Meta’s Chief AI Scientist know less about the failure modes of Meta’s AI than me, a random internet user?). And if he does know, then he knows perfectly well of a glaring counterexample to his “corporations are aligned” theory. No, corporations are not fully aligned: they keep going astray in all sorts of ways as soon as their reward function doesn’t match humanity’s, and regulators at best play whack-a-mole with them, because none of their mistakes have been existential.
Yet.
That we know of.
So, the argument really comes apart at the seams. If tomorrow a corporation were given the chance to push a button with a 50% chance of multiplying its revenue by ten and a 50% chance of destroying the Earth, it would push it, and if it destroyed the Earth, new corporate law could never arrive in time to fix that.
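To make the reward-function mismatch concrete, here’s a toy expected-value sketch. All the numbers, including the stand-in cost of extinction, are my own illustrative assumptions; the point is only that an objective which doesn’t price in the catastrophe recommends pushing the button:

```python
# Toy model: expected value of pushing vs. not pushing the button,
# under two different reward functions. Numbers are illustrative only.

p = 0.5           # chance the button pays off
revenue = 1.0     # current revenue, normalized to 1

# Corporate reward function: revenue only. If the Earth is destroyed,
# revenue goes to zero, but there is no further penalty in this objective.
ev_push_corp = p * (10 * revenue) + (1 - p) * 0.0   # = 5.0
ev_hold_corp = revenue                               # = 1.0

# Humanity's reward function: same revenue term, but extinction carries a
# huge negative term (any sufficiently large finite stand-in makes the point).
EXTINCTION_COST = 1e12  # assumption, purely illustrative
ev_push_human = p * (10 * revenue) + (1 - p) * (-EXTINCTION_COST)
ev_hold_human = revenue

print(f"corporation: push={ev_push_corp:.1f} vs hold={ev_hold_corp:.1f} -> pushes")
print(f"humanity:    push={ev_push_human:.2e} vs hold={ev_hold_human:.1f} -> never pushes")
```

Under the revenue-only objective, pushing dominates five to one; add the extinction term and pushing is catastrophically negative. That gap is the misalignment, and no after-the-fact regulation can close it.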