“Kyubey from Madoka Magica, white creature with four ears, 4k high quality anime, screenshot from Puella Magi Madoka Magica” (I fiddled with the prompt because I don’t think it knows Madoka quiiiiite well enough, and was giving me vaguely Kyubey-themed anime girls.)
Since OpenAI optimized its output for the kinds of things you suggested in your article (dresses, animals), I believe that hidden in the depths of the AI it can pull up pictures such as Kyubey, but it requires an un-optimized input (as in broken English, or maybe Japanese for this specific one).
So it's basically like telling an alien what to generate in its own language… and the problem is that we don't know what that language is with the information currently available (with only tests from DALL-E 1 to go on).
So it knows the color scheme and then tries to make some sort of Pokémon out of it. I think the AI may believe it is creating concept art for a fictional screenshot. Even when you give it a show to work from, it doesn't understand what to pull from it. Whenever I get access to DALL-E 2, I will try to figure out key buzzwords to give it.
I also think that since I added "pixiv", and most of the pictures there are anthropomorphic, it kinda just tagged that in. I believe the prompting is much more complicated than we think and requires further evaluation. There have to be certain phrases in specific orders that the AI responds to better, which the community doesn't know yet; a rough sketch of how such an ordering search might look is below.
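For anyone who wants to test the "specific orders" idea systematically rather than by hand, here is a minimal hypothetical sketch. It assumes the legacy `openai` Python SDK's `Image.create` endpoint (at the time, DALL-E 2 access was actually through the invite-only web UI, so treat the API call and the fragment list as illustrative, not as the way anyone actually ran these tests):

```python
# Hypothetical sketch: brute-force the orderings of prompt fragments to look
# for "buzz word" orders the model responds to better. Assumes the legacy
# openai Python SDK (openai < 1.0); DALL-E 2 access at the time was via the
# web UI, so this is illustrative only.
import itertools
import openai

openai.api_key = "sk-..."  # placeholder key

# Fragments taken from the prompt quoted above.
fragments = [
    "Kyubey from Madoka Magica",
    "white creature with four ears",
    "screenshot from Puella Magi Madoka Magica",
    "4k high quality anime",
]

# Generate one image per ordering so the results can be compared side by side.
for ordering in itertools.permutations(fragments):
    prompt = ", ".join(ordering)
    response = openai.Image.create(prompt=prompt, n=1, size="512x512")
    print(prompt, "->", response["data"][0]["url"])
```

With four fragments that is only 24 permutations, which is small enough to eyeball; the same loop could also swap fragments in and out (e.g. dropping "pixiv") to isolate which tag is pulling the output toward anthropomorphic results.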