Is this about recent demos of Hollywood-level image enhancement, and how they’re not discovering what’s in the image, but making stuff up that’s consistent with it? And similar demos with GPT-3, that one might call “text enhancement”?
I wonder when someone investigating a crime will try feeding all the evidence to something like GPT-3, asking it to continue the sentence “Therefore the guilty person is...”, and then presenting the output as evidence in court.
I wasn’t aware of the image enhancement stuff, but it sounds different from what I’m getting at. If I had to write a moral for this story, I would say “Just because X is all the information you have to go on doesn’t mean that X is enough information to give a high-quality answer to the problem at hand”.
One place where I feel like I see people making this mistake is with the problem of induction. There are those who say “well, if you take the problem of induction too seriously, you can’t know for sure that the sun will rise tomorrow!” and conclude that there must be an issue with the problem of induction, rather than wondering whether they really might not know for sure that the sun will rise tomorrow.
I (believe that I) saw Scott Alexander make this sort of mistake in one of his recent posts, but I can’t go check because… well, the blog doesn’t exist at the moment. Actually, I heard it through the podcast, which is still available, so I might just listen back to the recent episodes and see if I (1) find the snippet I’m thinking of and (2) still think it’s an instance of this mistake. If condition 1 is met, I’ll come back and edit in a report of condition 2.
I think that’s what I had in mind. One of the “image enhancement” demos takes a heavily pixelated face and gives a high-quality image of a face that may look very little like the real face. Another takes the top half of a picture and fills in the bottom half. In both cases it’s just making up something which may be plausible given the input, but no more plausible than countless other possible extrapolations.
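(Not the demos’ actual code, but here’s a quick NumPy sketch of why the pixelated input can’t settle the question: you can construct arbitrarily many distinct high-res images that all average back down to exactly the same pixelated input, so picking any one of them is guesswork dressed up as recovery.)

```python
import numpy as np

rng = np.random.default_rng(0)

def upsample_with_noise(low: np.ndarray, k: int = 8, scale: float = 20.0) -> np.ndarray:
    """Build a high-res image whose k-by-k block averages equal `low` exactly.

    Any zero-mean perturbation within each block survives block-averaging,
    so there are countless distinct "enhancements" consistent with the input.
    """
    high = np.kron(low, np.ones((k, k)))   # nearest-neighbour upsample
    noise = rng.normal(0, scale, high.shape)
    # Subtract each block's mean so block-averaging still reproduces `low`.
    block_means = noise.reshape(low.shape[0], k, low.shape[1], k).mean(axis=(1, 3))
    noise -= np.kron(block_means, np.ones((k, k)))
    return high + noise

def downsample(high: np.ndarray, k: int = 8) -> np.ndarray:
    """Block-average back down to the low-res grid (the 'pixelated' version)."""
    h, w = high.shape
    return high.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

low = rng.uniform(0, 255, (16, 16))        # stand-in for a pixelated face
candidate_1 = upsample_with_noise(low)
candidate_2 = upsample_with_noise(low)

# Both candidates downsample to exactly the same input, yet differ from each other.
print(np.allclose(downsample(candidate_1), low))   # True
print(np.allclose(downsample(candidate_2), low))   # True
print(np.abs(candidate_1 - candidate_2).mean())    # clearly non-zero
```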