Desensitizing Deepfakes
There is some discussion on the forum about using AI to detect whether something is a deepfake, and perhaps some trust that anti-deepfake bots will be better resourced in this arms race. But could we give ourselves a bit of breathing room here?
Could it be incredibly valuable to accelerate desensitization to deepfakes, or at least make people more aware of them, using humor?
It seems like a real risk that someone eventually creates a convincing and harmful deepfake, à la the current President saying, “America has launched nuclear weapons against Russia.” Or vice versa, or literally anything bad, with voice and video, that to our eyes is Very Real and Convincingly Terrible.
Should we be subverting people’s expectations by familiarizing them with deepfakes, perhaps best done by example? If you have not seen them already, the memes of US Presidents playing computer games (warning, expletives+: US Presidents play Minecraft) are actually a pretty good example of this. On the flip side, I also recall seeing a video shown to an older individual who, despite Biden saying the most heinous things in it, found it more believable that the video was real than that it was a deepfake.
So maybe spamming content of significant figures doing wacky things is effective for updating people’s models of the probability that something is a deepfake. I considered pushing the bounds further by walking the fine line of a ‘real fake’ nuclear assault, à la Biden saying, “My fellow Americans, we have launched nukes against… the Moon.” But that seems unnecessarily close to the mark.
There could be an info hazard here: by demonstrating the capabilities of deepfakes, you also show people that this is a powerful manipulation tool, and thereby increase the risk of malicious use.
I’m unsure of the call to action here; I thought I’d share, in the hope of steelmanning/meta-feedback on this post, and to point out a potentially impactful angle that already seems to be working pretty well in some ways...
In short: desensitize people to deepfakes without risk, in a way that may make them more likely to question whether something they see or hear is real (could be worth more rigorous study).