The key argument is that Stable Diffusion is more accessible, meaning more people can create deepfakes with fewer images of their subject and no specialized skills. From above (links removed):
“The unique danger posed by today’s text-to-image models stems from how they can make harmful, non-consensual content production much easier than before: particularly via inpainting and outpainting, which allow a user to interactively build realistic synthetic images from natural ones, and via DreamBooth or other easily used tools, which allow for fine-tuning on as few as 3-5 examples of a particular subject (e.g. a specific person). More such tools are rapidly becoming available following the open-sourcing of Stable Diffusion. It is clear that today’s text-to-image models have capabilities distinct from those of methods like Photoshop, RNNs trained on specific individuals, or “nudifying” apps. These previous methods all require a large amount of subject-specific data, human time, and/or human skill. And no, you don’t need to know how to code to interactively use Stable Diffusion, uncensored and unfiltered, including in/outpainting and DreamBooth.”
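To make the accessibility claim concrete, here is a minimal sketch of the inpainting workflow using Hugging Face’s diffusers library. This is an assumed illustration, not something from the quoted post: the model ID, file names, and prompt are placeholders.

```python
# Minimal sketch: inpainting with Stable Diffusion via the Hugging Face
# diffusers library. Model ID, file names, and prompt are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Load a publicly released inpainting checkpoint (assumed example).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# A natural photo plus a mask marking the region to regenerate.
image = Image.open("photo.png").convert("RGB").resize((512, 512))
mask = Image.open("mask.png").convert("RGB").resize((512, 512))

# White pixels in the mask are replaced according to the text prompt.
result = pipe(
    prompt="a placeholder description of the desired content",
    image=image,
    mask_image=mask,
).images[0]
result.save("inpainted.png")
```

The point is the shape of the workflow, not the specific checkpoint: a masked region of a real photo is regenerated from a text prompt in a handful of lines, and GUI front-ends wrap this same call for users who never touch code.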
If what you’re questioning is the basic ability of Stable Diffusion to generate deepfakes, here [1] is an NSFW link to a thread on www.mrdeepfakes.com where a user says, “having played with this program a lot in the last 48 hours, and personally SEEN what it can do with NSFW, I guarantee you it can 100% assist not only in celeb fakes, but in completely custom porn that never existed or will ever exist.” He then provides links to NSFW images generated by Stable Diffusion, including deepfakes of celebrities. This is apparently facilitated by the LAION-5B dataset, which Stability AI admits contains about 3% unsafe images and which he claims has “TONS of captioned porn images in it”.

[1] Warning, NSFW: https://mrdeepfakes.com/forums/threads/guide-using-stable-diffusion-to-generate-custom-nsfw-images.10289/
“having played with this program a lot in the last 48 hours, and personally SEEN what it can do with NSFW, I guarantee you it can 100% assist not only in celeb fakes, but in completely custom porn that never existed or will ever exist.”
Completely custom porn is not necessarily porn that actually looks like existing people in a way that would fool a critical observer.
More importantly, the person who posted this is not someone without specialized skills. Your claim is that basically anyone can just use the technology, as it exists today, to create deepfakes. There might be a future where it’s actually easy for someone without skills to create deepfakes, but that link doesn’t show that this future has arrived.
With previous technology, you create a deepfake porn image by taking a photo of someone, cropping out the head, and pasting that head into a porn image. You don’t need countless images of them to do so. For your charge to be true, the present Stable Diffusion-based tech would have to be either much easier than existing Photoshop-based methods or produce more convincing images than low-skill Photoshop deepfakes.
The forum thread demonstrates that neither of these is the case at present.