That interview seems totally fine? Honestly, Yejin Choi is the kind of person people imagine ML research is full of when they say "alignment isn't a problem because surely all the AI researchers will realize that building AI that wants to do bad things is stupid." She recognizes the basic problem and is approaching it from the "establishment" side, and while this comes with some blind spots, I think Delphi was absolutely useful alignment work.
Is news reporting on AI safety worse than news reporting on everything else? Perhaps the right question is simply "why does journalism suck?", and the issue is unrelated to AI safety as such.
the overwhelming majority of news articles covering AI depict AI safety as negatively as possible, and have for nearly a decade now
That's a strong statement that doesn't match my perception. Do you have any statistics? E.g. the fraction of AI safety-related articles in "reputable news outlets" that portray it negatively? Ideally broken down by publication.