Maybe I should clarify what I mean by “content moderation”. I use the term more loosely than just platforms filtering what can be shared. I think we will hear demands for moderation wherever generative AI is used. Example: Microsoft is going to integrate generative AI into Word. The second they do this, they run into all the content moderation questions OpenAI has with ChatGPT. What if I want to write a story about [morally unacceptable thing]: will the AI assist me in fleshing out the details of [repugnant scene] or not? If not, will it help me write the rest of the story, or will it block me the moment its content filters detect a guideline violation? Who writes the guidelines? Can I disable the content filter if I disable the AI? If not, will it let me save my work? Will it report me if I violate specific guidelines? Will I lose access to Word? Are there circumstances under which law enforcement gets a hint that I might be a problem? Who gets to decide these questions? I think it would be problematic if the way we answer them is big tech companies just implementing something and the rest of us living with whatever they come up with.
It all boils down to us integrating a tool into our creative processes that can evaluate what we are asking it to do and enforce compliance with whatever rules “we” give it. And it’s not hypothetical. One could make an argument that ChatGPT is among the more important creative tools at the moment. I use it all the time, privately and for my work. Yet I’m extremely careful about what I ask it to write. I self-censor, and when I work on a story, I write in the full knowledge that both my prompts and the model’s output are always and automatically monitored. Assuming being augmented by AI becomes the normal creative workflow, this is a lot of power in the hands of those providing the tools. In a way, they control what can be said and shown.
At the moment, I see this as a dilemma! I don’t think we want generative systems to just do whatever anyone tells them in any situation (think of generated video and young teenagers experimenting with porn or shocking violence), yet implementing controls has all the problems I described above.
All of this is how I’m thinking about it at the moment, and I haven’t done that much thinking yet. I’m looking for the debate about this, but it’s surprisingly hard to find much relevant discussion. Most of it is people defending content moderation in the context of communities trying to offer a safe space for their users, and those discussions aren’t that well calibrated to the problem I have in mind.