As you said, this seems like a pretty bad argument.
Something is going on between the {user instruction} … {instruction to the image model}. But we don’t even know if it’s in the LLM. It could be that there are dumb manual “if” parsing statements that act differently depending on periods, etc. It could be that there are really dumb instructions given to the LLM that creates the instructions for the image model, as there were for Gemini. So, yeah.
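To make that first possibility concrete, here is a toy sketch of what a “dumb manual parsing layer” sitting between the user instruction and the image-model prompt could look like. This is purely hypothetical; the function and its rules are invented for illustration and are not taken from any real pipeline:

```python
# Hypothetical illustration only: a naive rule-based layer between the user
# instruction and the image-model prompt. It shows how trivial "if" parsing
# could produce behavior differences that have nothing to do with the LLM.

def build_image_prompt(user_instruction: str) -> str:
    # Dumb heuristic: treat text ending in a period as a "complete sentence"
    # and pass it through mostly untouched...
    if user_instruction.rstrip().endswith("."):
        return user_instruction.strip()
    # ...but rewrite anything else into a templated request, silently changing
    # what the image model actually sees.
    return f"A high-quality image of {user_instruction.strip()}"

print(build_image_prompt("a red cat sitting on a chair."))
# -> "a red cat sitting on a chair."
print(build_image_prompt("a red cat sitting on a chair"))
# -> "A high-quality image of a red cat sitting on a chair"
```

The point is only that punctuation-sensitive behavior like this could live entirely outside the model.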
I am looking for a discussion of evidence that the LLM’s internal “true” motivation or reasoning system is very different from a human’s, despite the human-like output, and that in outlying conditions very different from the training environment, it will behave very differently.
I wouldn’t say that’s exactly the best argument, but for example
That is good, thank you.
That seems to be an argument that something more than random noise is going on, but not an argument for ‘LLMs are shoggoths’?
Definition given in post:
I think my example counts.