I am a bit confused about why people find this "disappointing"; maybe because a list alone is not enough, and this one is far from complete? I would also be very concerned if OpenAI does not actually care about these issues and only did this for PR value (it seems some other companies would do that). Otherwise, these are concrete risks that are happening right now, actively harming people, and they need to be addressed. These practices also set good examples and precedents for regulation and for developing with a safety mindset. Linking a few resources:
child safety:
https://cyber.fsi.stanford.edu/news/ml-csam-report
https://www.iwf.org.uk/media/nadlcb1z/iwf-ai-csam-report_update-public-jul24v13.pdf
private information/PII:
https://arxiv.org/html/2410.06704v1
https://arxiv.org/abs/2310.07298
deepfakes:
https://www.pbs.org/newshour/world/in-south-korea-rise-of-explicit-deepfakes-wrecks-womens-lives-and-deepens-gender-divide
https://www.nytimes.com/2024/09/03/world/asia/south-korean-teens-deepfake-sex-images.html
bias:
https://arxiv.org/html/2405.01724v1
https://arxiv.org/pdf/2311.18140