Since OpenAI optimized its output for the kinds of things you suggested in your article (dresses, animals), I believe that, hidden in the depths of the AI, it can pull up pictures such as Kyubey — but it requires an un-optimized input (such as broken English, or maybe Japanese for this specific one).
So it's basically like telling an alien what to generate in its own language… And the problem is that we don't know what that input looks like with the information currently available (with tests from DALL-E 1).