The last “censoring” of Stable Diffusion was done in the code and could have been turned off with a 2-line code change. Was it done another way this time?
Apparently. Stability claimed “numerous safeguards” starting from the data onwards, which cannot simply be disabled by 1 line of Python (if nsfw(generated image) then error), as I recall. I can’t easily find specifics on what all of the anti-NSFW countermeasures were, but I don’t see anyone claiming to have trivially beaten them. If they did something like filter the training data even more heavily to eliminate all kinds of nudity & suggestive imagery & porn, then ignorance may be baked into the model’s final concepts so heavily that you might as well not bother with SD3 and work on something else.
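For context, here is a minimal sketch of the earlier, code-level kind of safeguard being contrasted with SD3’s, assuming the Hugging Face diffusers pipeline (one common way SD 1.x was run; the original release scripts had an analogous removable check). It illustrates why a bolt-on classifier is trivially bypassed in a way that data-level filtering is not; it is not a claim about what Stability shipped with SD3.

```python
# Sketch: the SD 1.x-era "safeguard" was a separate NSFW classifier run on the
# finished image, outside the model itself.
from diffusers import StableDiffusionPipeline

# Default load: outputs pass through the bundled safety checker after generation.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# The (roughly) one-line disable: drop the checker. The underlying model's
# learned concepts are untouched, so this removes the safeguard entirely.
pipe.safety_checker = None

image = pipe("a photo of an astronaut riding a horse").images[0]
```

A safeguard baked in at the data stage (e.g. scrubbing nudity from the training set) has no such switch: there is no component to delete, because the model never learned the concepts in the first place.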