Low-res images like 256px images are a pretty hard source to try to learn language from! Which is not to say that some mad scientist won’t try it anyway, but you certainly can’t expect GPT-3 fluency from an image model which sees language primarily as strip-mall signage or the occasional axis label, all downscaled to near-unreadability. (Note that ‘captions’ of images will often, or usually, not transcribe all the text visible in said image.) You might think that they would start from a pretrained small GPT-3 or something (perhaps run an OCR NN to transcribe all text in the images and append that to the caption), but they don’t seem to (at least, checking the DALL-E 2 & GLIDE papers indicates no such use, and the ‘text encoder’ appears to be a diffusion model and so couldn’t be a pretrained GPT-3?). Oh well. You can’t do everything, you know.
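For the OCR-augmentation idea gestured at above, a minimal sketch of what the caption preprocessing might look like, assuming Pillow and pytesseract are available; the function name and the caption-concatenation format are purely illustrative, not anything the DALL-E 2 or GLIDE pipelines actually do:

```python
# Sketch: append OCR-transcribed in-image text to a caption before training
# a text encoder on it. Assumes Pillow + pytesseract (Tesseract wrapper);
# names and formatting here are hypothetical.
from PIL import Image
import pytesseract

def augment_caption(image_path: str, caption: str) -> str:
    """Return the caption with any text Tesseract can read out of the image appended."""
    img = Image.open(image_path)
    ocr_text = pytesseract.image_to_string(img).strip()
    if ocr_text:
        # Concatenate so the text encoder sees the in-image text, not just the caption.
        return f"{caption} | Text in image: {ocr_text}"
    return caption

# Example usage:
# augment_caption("strip_mall.jpg", "a photo of a strip mall at dusk")
```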