There’s an open letter at https://openletter.net/l/disrupting-deepfakes. I signed, but with caveats, which I’m putting here.
Background context is that I participated in building the software platform behind the letter, which was built as general infrastructure, without a specific open letter in hand. It has mechanisms for sorting noteworthy signatures to the top and for validating signatures for authenticity. I expect there to be other open letters in the future, and I think this is an important piece of civilizational infrastructure.
I think the world having access to deepfakes, and deepfake-porn technology in particular, is net bad. However, these stakes are small compared to the upcoming stakes with superintelligence, which has a high probability of killing literally everyone.
If translated into legislation, I think this would put turnkey-hosted deepfake-porn generation, as well as model weights pre-tuned for porn, into a place very similar to where piracy is today. Which is to say: The Pirate Bay is illegal, wget is not, and the legal distinction is the advertised purpose.
(Where non-porn deepfakes are concerned, I expect model providers to try a bit harder at watermarking, still fail, and successfully defend themselves legally on the basis that they tried.)
The analogy to piracy goes a little further. If laws are passed, deepfakes will be a little less prevalent than they would otherwise be, and there won’t be above-board businesses built around them… but there will still be lots of them. I don’t think there-being-lots-of-them can be prevented by any feasible means. The benefit of this will be the creation of common knowledge that the US federal government’s current toolkit is not capable of holding back AI development and access, even when it wants to.
I would much rather they learn that now, when there’s still a nonzero chance of building regulatory tools that would function, than later.
I’m reading you to be saying that you think this policy is bad on its overt purpose but ineffective anyway, and that the covert reason, testing the ability of the US federal government to regulate AI, is worth the information cost of a bad policy.
I definitely appreciate that someone signing this has written their reasoning publicly, and I think it’s not crazy to expect this to turn out well. Still, I feel it’s a bit disingenuous to sign the letter for this reason, though I’m not certain.
I think preventing the existence of deceptive deepfakes would be quite good (if it would work); audio/video recording has done wonders for accountability in all sorts of contexts, and it’s going to be terrible to suddenly have every recording subjected to reasonable doubt. I think preventing the existence of AI-generated fictional-character-only child pornography is neutral-ish (I’m uncertain of the sign of its effect on rates of actual child abuse).