That interview seems totally fine? Honestly, Yejin Choi is the kind of person people imagine ML research is full of when they say “alignment isn’t a problem, because surely all the AI researchers will realize that building AI that wants to do bad things is stupid.” She recognizes the basic problem and is approaching it from the “establishment” side, and while that comes with some blind spots, I think Delphi was absolutely useful alignment work.