This could be alleviated somewhat by prominent people in the AI risk camp paying at least lip service to the “AI is dangerous because systemic racist/sexist biases are baked into the training data” concern. LessWrong tends to neglect or sneer at those worries (and similar ones I’ve seen in typical left-wing media), but they probably have some significance; at the very least they fall under the broader concern that whoever wins the AI alignment race will lock in their values forever and ever*.
* Which, to be honest, is almost as scary as the traditional paperclip maximiser if you imagine Xi Jinping or Putin or “random figure of your outgroup you particularly dislike” winning the race.
Yes, seems sensible.