Yeah, I see what you mean. But even if it gets the correct letter shapes, it’s still nonsense, right? It’s treating writing as a visual texture, not as anything actually written. Maybe writing is, in a sense, harder for image generators—a piece of paper with written text is much more information-dense than, say, a patch of grass—especially if it’s integrated into a scene.
Low-res images like 256px images are a pretty hard source to try to learn language from! Which is not to say that some mad scientist won’t try it anyway, but you certainly can’t expect GPT-3 fluency from an image model which sees language primarily as strip-mall signage or the occasional axis label, all downscaled to near-unreadability. (Note that ‘captions’ of images often, or even usually, don’t transcribe all the text visible in said image.) You might think that they would start from a pretrained small GPT-3 or something (perhaps run an OCR NN to transcribe all text in the images and append that to the caption), but they don’t seem to (at least, checking the DALL-E 2 & GLIDE papers turns up no such use, and the ‘text encoder’ appears to be a diffusion model and so couldn’t be a pretrained GPT-3?). Oh well. You can’t do everything, you know.
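For concreteness, here’s a minimal sketch of what that “OCR the image and append it to the caption” preprocessing might look like, assuming Pillow and pytesseract are available; the helper name and the example file/caption are hypothetical illustrations, not anything the DALL-E 2 or GLIDE pipelines actually do.

```python
# Minimal sketch: run an off-the-shelf OCR model over a training image and
# append whatever text it finds to the caption, so the text side of the model
# at least sees the in-image text as tokens.
# Assumes Pillow + pytesseract (Tesseract) are installed; the helper name
# `augment_caption_with_ocr` is hypothetical, not from any existing pipeline.
from PIL import Image
import pytesseract


def augment_caption_with_ocr(image_path: str, caption: str) -> str:
    """Return the caption with any OCR-detected in-image text appended."""
    image = Image.open(image_path)
    ocr_text = pytesseract.image_to_string(image).strip()
    if not ocr_text:
        return caption
    # Collapse whitespace so the appended transcript stays on one line.
    ocr_text = " ".join(ocr_text.split())
    return f"{caption} [text in image: {ocr_text}]"


if __name__ == "__main__":
    # Hypothetical example file and caption, purely for illustration.
    print(augment_caption_with_ocr("strip_mall_sign.jpg", "a strip mall at dusk"))
```

At 256px, of course, the OCR itself would often fail on small or stylized text, which is part of why this kind of augmentation only goes so far.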