and that that will come with the need for content moderation
It will certainly come with calls for content moderation, but for all the reasons you allude to, the assertion that there will be a need for such moderation seems quite tendentious.
Agreed, good point! Let’s say there will be non-stupid arguments in favor of content moderation. For example (off the top of my head):
Children need to learn to use creative tools; if a video editor were able to generate hardcore porn or excessively violent content, it would be tricky to leave them alone with it.
AI doesn’t just generate content; it brings in knowledge. Some knowledge is restricted from circulation for good reason (this is basically the bioterrorism argument).
I tend to think it’s better to limit the model’s capabilities so they fit the use case (e.g. no ability to create porn in software used by children) than to have a Llama Guard style moderation tool in the loop supervising both user and model behavior; but my own position in the debate is still vague. I also don’t know what different approaches people are trying out; I’d really like to read up on them, though.
(What I do know: I don’t want the Metas and Googles of the world to be in charge of defining the control mechanisms for something that will be involved in basically all of our creative processes.)
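To make the "guard in the loop" alternative mentioned above a bit more concrete, here is a minimal sketch of the pattern, not Llama Guard's actual API: `generate()` and `guard_classify()` are hypothetical stand-ins for a creative model and a safety classifier, and the blocked-term check is a toy placeholder for a real moderation model.

```python
def generate(prompt: str) -> str:
    """Placeholder for the creative model (e.g. a video or text generator)."""
    return f"<generated content for: {prompt}>"


def guard_classify(text: str) -> bool:
    """Placeholder safety check; returns True if the text is allowed.
    A real deployment would call a dedicated moderation model here."""
    blocked_terms = ["example-blocked-term"]
    return not any(term in text.lower() for term in blocked_terms)


def moderated_generate(user_prompt: str) -> str:
    # 1. Check the user's request before it reaches the model.
    if not guard_classify(user_prompt):
        return "Request refused by input moderation."
    # 2. Generate, then check the model's output as well.
    output = generate(user_prompt)
    if not guard_classify(output):
        return "Output withheld by output moderation."
    return output


if __name__ == "__main__":
    print(moderated_generate("a short nature documentary scene"))
```

The point of the sketch is the shape of the design choice: the moderation layer sits outside the model and inspects both sides of the exchange, whereas the capability-limiting approach would remove the unwanted abilities from the model itself and need no supervising layer at all.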