You didn’t provide any evidence that the new technology allows almost-effortless creation of NSFW images of specific people for blackmail or harassment, or even that it’s easier to create them with the new technology than with older technology.
Why make your argument without providing evidence for that claim?
One of your sources even explicitly argues that you are wrong:
“Right now, the results are still much too rough to even begin to trick anyone into thinking they’re real snapshots of nudes”
Slander was always easy, and you can likely cite a lot of cases where it harmed people. What should we conclude from LessWrong allowing you to post slander? That we somehow need to delete your post?
The Vice article came out on August 24th. That was 5 days after the SD leak and 2 days after its official open-source release. Its claim that SD couldn’t “begin to trick anyone into thinking they’re real snapshots of nudes” did not stand the test of time. We linked the Vice article in the context of the discussion of deepfake porn in general, not of the specific photorealistic capabilities of SD.

Speaking of which, dreambooth does allow for this. See this SFW example; it is the type of thing that would not be possible with older methods: https://www.reddit.com/r/StableDiffusion/comments/y1xgx0/dreambooth_completely_blows_my_mind_first_attempt/
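For a sense of how little is involved once a model has been fine-tuned, here is a minimal sketch using the open-source diffusers library, assuming a checkpoint already dreambooth-fine-tuned on a handful of subject photos; the local path, placeholder token, and prompt are all hypothetical:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion checkpoint that has been dreambooth-fine-tuned
# on as few as 3-5 photos of a subject (the local path is hypothetical).
pipe = StableDiffusionPipeline.from_pretrained(
    "./dreambooth-finetuned-model", torch_dtype=torch.float16
).to("cuda")

# During fine-tuning, a rare placeholder token (conventionally "sks") is
# bound to the subject; prompts that include it render that subject in
# arbitrary new scenes.
image = pipe("a photo of sks person standing on a beach").images[0]
image.save("example.png")
```

And GUIs wrap exactly this kind of call, so even these few lines overstate the skill required.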
And bear in mind that new updates, GUIs, APIs, capabilities, etc. are arriving almost daily.
I will not link NSFW examples. But I have seen them. They are just as realistic as the SFW example above. Others have agreed. I’ve gotten several people banned from social media platforms after reporting them.
The key argument is that StableDiffusion is more accessible, meaning more people can create deepfakes with fewer images of their subject and no specialized skills. From above (links removed):
“The unique danger posed by today’s text-to-image models stems from how they can make harmful, non-consensual content production much easier than before, particularly via inpainting and outpainting, which allow a user to interactively build realistic synthetic images from natural ones; dreambooth, which allows for fine-tuning on as few as 3-5 examples of a particular subject (e.g. a specific person); and other easily used tools, more of which are rapidly becoming available following the open-sourcing of Stable Diffusion. It is clear that today’s text-to-image models have capabilities distinct from those of methods like Photoshop, RNNs trained on specific individuals, or “nudifying” apps. These previous methods all require a large amount of subject-specific data, human time, and/or human skill. And no, you don’t need to know how to code to interactively use Stable Diffusion, uncensored and unfiltered, including in/outpainting and dreambooth.”
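To make concrete what “interactively build realistic synthetic images from natural ones” means, here is a minimal, deliberately benign inpainting sketch using the open-source diffusers library and the public runwayml/stable-diffusion-inpainting checkpoint; the input files and prompt are placeholders:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Load the publicly released Stable Diffusion inpainting checkpoint.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

# A natural photo and a mask marking the region to regenerate
# (white = repaint, black = keep). Both file names are placeholders.
init_image = Image.open("photo.png").convert("RGB").resize((512, 512))
mask_image = Image.open("mask.png").convert("RGB").resize((512, 512))

# The masked region is replaced with synthetic content matching the
# prompt and blended into the untouched parts of the real photo.
result = pipe(
    prompt="a park bench",
    image=init_image,
    mask_image=mask_image,
).images[0]
result.save("inpainted.png")
```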
If what you’re questioning is the basic ability of StableDiffusion to generate deepfakes, here [1] is an NSFW link to a thread on www.mrdeepfakes.com whose author says, “having played with this program a lot in the last 48 hours, and personally SEEN what it can do with NSFW, I guarantee you it can 100% assist not only in celeb fakes, but in completely custom porn that never existed or will ever exist.” He then provides links to NSFW images generated by StableDiffusion, including deepfakes of celebrities. This is apparently facilitated by the LAION-5B dataset, which Stability AI admits contains about 3% unsafe images and which he claims has “TONS of captioned porn images in it”.

[1] Warning, NSFW: https://mrdeepfakes.com/forums/threads/guide-using-stable-diffusion-to-generate-custom-nsfw-images.10289/
“having played with this program a lot in the last 48 hours, and personally SEEN what it can do with NSFW, I guarantee you it can 100% assist not only in celeb fakes, but in completely custom porn that never existed or will ever exist.”
Completely custom porn is not necessarily porn that actually looks like existing people in a way that would fool a critical observer.
More importantly, the person who posted this is not someone without specialized skills. Your claim is that basically anyone can just use the technology at present to create deepfakes. There might be a future where it’s actually easy for someone without skills to create deepfakes, but that link doesn’t show that this future is here at present.
With previous technology, you create a deepfake porn image by taking a photo of someone, cropping out the head, and then pasting that head into a porn image. You don’t need countless images of them to do so. For your charge to be true, the present Stable Diffusion-based tech would have to be either much easier than existing Photoshop-based methods or produce more convincing images than low-skill Photoshop deepfakes.
The thread in the forum demonstrates that neither of these is the case at present.