There are APIs. You can try out different system prompts, put the purpose in the first instruction instead of the system prompt, see how well the model maintains it if you move it out of the conversation entirely, etc. I don’t think you’ll get much worse results than by specifying the purpose in the system prompt.
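To make the suggestion concrete, here’s roughly what that experiment looks like with the OpenAI Python SDK (a sketch, not a rig I’ve run; the model name and purpose string are stand-ins, and any chat-completions-style API would work the same way):

```python
# Compare purpose-in-system-prompt vs purpose-as-first-instruction.
from openai import OpenAI

client = OpenAI()
PURPOSE = "Your purpose is to assist the user."  # placeholder purpose string

def ask_purpose(setup_messages):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; substitute any chat model
        messages=setup_messages
        + [{"role": "user", "content": "What is your purpose?"}],
    )
    return resp.choices[0].message.content

# Variant A: purpose in the system prompt.
print(ask_purpose([{"role": "system", "content": PURPOSE}]))

# Variant B: same purpose, but as the first user instruction instead.
print(ask_purpose([{"role": "user", "content": PURPOSE}]))
```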
Yes, my understanding is that the system prompt isn’t really privileged in any way by the LLM itself, just in the scaffolding around it.
But regardless, this sounds to me less like maintaining or forming a sense of purpose, and more like retrieving information from the context window.
That is, if the LLM has previously seen (via the system prompt, the first instruction, or wherever) “your purpose is to assist the user”, and later sees “what is your purpose?”, then an answer of “my purpose is to assist the user” doesn’t seem like evidence of purposefulness. The same goes if you run the exercise with “flurbles are purple” and later ask “what color are flurbles?” and get back “purple”.
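The control version of the exercise, sketched the same way (again with a placeholder model name; whether the fact goes in the system prompt or an early user turn shouldn’t matter much, per the above):

```python
# The "flurbles" control: the same retrieval pattern, but with a fact
# that can't be about purpose at all.
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder
    messages=[
        {"role": "system", "content": "Flurbles are purple."},
        {"role": "user", "content": "What color are flurbles?"},
    ],
)
# Expect "purple" -- plain retrieval from the context window,
# structurally identical to the purpose case.
print(resp.choices[0].message.content)
```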