Thank you for pointing this out! It seems I wasn’t informed enough about the context. I’ve dug a bit deeper and will update the text to:
Another piece reveals that OpenAI contracted Sama to employ Kenyan workers at under $2/hour (against an average of roughly $0.50/hour in Nairobi) to annotate toxic content for ChatGPT and undisclosed graphical models, with reports of employees traumatized by the explicit and graphic annotation work, union busting, and false hiring promises. A serious issue.
For some more context, here is the Facebook whistleblower case (with court proceedings against Facebook and Sama still ongoing in Kenya) and an earlier MIT Sloan report that doesn't actually find strong positive effects (though, interestingly, it is written as if it does). The reports describe pay gaps tied to relocation bonuses, forced night shifts, false hiring promises, and allegedly even human trafficking. Beyond textual annotation, the workers also appear to have done graphical annotation.