As you said, this seems like a pretty bad argument.
Something is going on between the {user instruction} ….. {instruction to the image model}, but we don’t even know if it’s happening in the LLM. It could be that there are dumb manual “if” parsing statements that behave differently depending on periods and the like. Or it could be that the LLM that writes the instructions for the image model is itself given really dumb instructions, as was the case with Gemini. So, yeah.
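To make that concrete, here’s a minimal hypothetical sketch (every name and branch here is invented, not drawn from any real pipeline) of how a hand-written preprocessing step sitting between the user and the LLM could change behavior based on something as trivial as a trailing period, before any model weights are involved:

```python
# Hypothetical sketch only: shows how a hand-written "if" parsing step
# between the user and the prompt-rewriting LLM could behave differently
# depending on punctuation. It does not describe any known real system.

def preprocess_user_prompt(user_prompt: str) -> str:
    """Invented example of 'dumb manual if parsing' applied before the LLM sees the prompt."""
    prompt = user_prompt.strip()

    # A trailing period accidentally routes the prompt down a different branch.
    if prompt.endswith("."):
        # e.g. treat it as a "complete sentence" and pass it through untouched
        return prompt
    # e.g. otherwise wrap it in extra boilerplate for the prompt-rewriting LLM
    return f"Rewrite the following request as a detailed image description: {prompt}"


def build_image_model_instruction(user_prompt: str, rewrite_with_llm) -> str:
    """User instruction -> manual parsing -> LLM rewrite -> instruction to the image model."""
    preprocessed = preprocess_user_prompt(user_prompt)
    # rewrite_with_llm stands in for whatever LLM call produces the final image prompt
    return rewrite_with_llm(preprocessed)
```

The point of the sketch is just that two prompts differing only by a period could take entirely different paths through glue code like this, so the observed behavior difference tells us little about the LLM itself.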