I often hear about deepfakes (pictures and videos synthesized wholesale by a deep learning model and made to look real) and how they could greatly amplify the “fake news” phenomenon and undermine the public’s ability to evaluate evidence.
And this sounds like a well-founded worry, but then I was just thinking: what about Photoshop? That has existed for decades, and for all that time it has been possible to doctor images to look real. So why should deepfakes be any scarier?
Part of the answer could be that deepfakes can fake videos, not just images, but that can’t be all of it.
I suspect the main reason is that, in the future, deepfakes will be able to fool not just laypeople but experts. That does seem like an important threshold.
This raises another question: is it, in fact, impossible to fool experts with Photoshop? Are there fundamental limitations that prevent it from being that potent, and was this always understood, so that people were never particularly fearful of it? (FWIW, when I learned about Photoshop as a kid I freaked out with Orwellian visions even worse than people have about deepfakes now, and pretty much only relaxed out of conformity. I remain ignorant of the technical details of Photoshop and its capabilities.)
But even if deepfakes are bound to cross this threshold (not that it’s a fine line) in a way Photoshop never could, aren’t there already plenty of things that experts have had, and still have, trouble classifying as real or fake? Wikipedia’s list of hoaxes is extensive, though most of those fooled the public rather than experts. Still, I feel like there are plenty of hoaxes that lasted hundreds of years before being debunked (the Shroud of Turin, or maybe fake fossils?).
I guess we’re just used to seeing fewer hoaxes in modern times. In the past, hoaxes abounded and there often weren’t the right experts around to debunk them, so those times probably warranted a greater degree of epistemic learned helplessness. Over the last century, though, our forgery-spotting techniques have gotten much better while the corresponding forgeries haven’t kept up, so we happen to live in a time when the “offense” is relatively weaker than the “defense”. There’s no particular reason it should stay that way.
I’m really not sure how worried I should be about deepfakes, but having thought through all that, it does seem like the existence of “evidence” in political discourse is not an all-or-nothing phenomenon. Images and videos will likely come to be trusted less, and maybe other media as well, if deep learning contributes more to the “offense” than to the “defense”. Maybe things will settle into a not-so-much-worse equilibrium. Or maybe not. Either way, the deepfake phenomenon does not seem completely new.
There have been a handful of news stories about things like a company being scammed out of a lot of money (the CEO’s voice was faked over the phone). That’s a different issue from “the public” being fooled.