It’d be nice if LLM providers offered different ‘flavors’ of their LLMs. Prompting with a meta-request (system prompt) to act as an analytical scientist rather than an obsequious servant helps, but only partially. I imagine that a proper fine-tuned-from-base-model attempt at creating a fundamentally different personality would give a more satisfyingly coherent and stable result. I find that longer conversations tend to see the LLM lapsing back into its default habits, and becoming increasingly sycophantic and obsequious, requiring me to re-prompt it to be more objective and rational.
Seems like this would be a relatively cheap product variation for the LLM companies to produce.
[Edit: soon after I posted this, Anthropic released exactly this! Claude got ‘flavors’, and I find the formal style much more satisfying. I also use this “system prompt” in my preferences:
“When a question or request seems underspecified, ask clarifying questions. Avoid sycophancy or flattery. If I seem wrong, tell me so.”]