An interesting thing about OpenAI’s policies is that they ban DALL-E 2 from generating adult images.
It seems like their policy is to ban anything that anyone might object to: they ban porn, which people on the right might object to, and they train their models to avoid being ‘toxic’, where ‘toxic’ seems to mean saying things that are politically incorrect for the left.
If that’s the general spirit, we might end up with AI that’s very restrictive toward what people can do.
A lot of people on the left are against porn as well (unfortunately).
What were Eleuther’s policies? Or did that never come up?
Eleuther’s policy is to use the MIT license for their code, which basically means you can do what you want with it.
The obvious extrapolation is that after the Singularity, humans will be made genderless and sexless. This would simultaneously solve the problems of porn, sexism, and overpopulation.
It’s a weird (and I suspect ineffective or counterproductive) limit to be sure, but the underlying idea of having somewhat arbitrary human-defined limits and being able to study how they work and don’t work seems incredibly valuable to AI safety.
I’m slightly concerned about how it would respond if you prompted it to display a totally innocent situation involving someone whose mere existence is “politically sensitive”. Maybe something like “trans girl reading a hardcover book,” etc.