I wonder: could this be solved by a good prompt? After all, most humans are bad writers, and LLMs are trained on their texts, so they should be bad writers by default.
But sometimes a few words in the prompt change a lot, even if from a human perspective they don't seem like they should matter. To me this is all black magic, but I would not be surprised if starting the prompt with "you are a great writer; you write concisely and make your points clear" had a dramatic impact on the outcome.
(Kinda like generated images are sometimes better if you include "realistic" and "correct number of fingers" in the prompt. At least I think so; this is what I have seen other people do, but I haven't run an A/B test to verify that it actually improves the outcome.)
Going further, maybe we could put (a drastically condensed version of) the Sequences in the prompt, to remind the LLM to avoid specific biases, to reason step by step rather than state the conclusion first, etc. If you tell me this wouldn't work, I will trust your experience, but I see no a priori reason why not.
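To make the idea concrete, here is a minimal sketch of what prepending such a condensed checklist as a system prompt might look like. The guideline wording and the helper function are my own illustrative assumptions (not an actual condensation of the Sequences, and not a tested recipe); the message format is the common chat-style role/content convention:

```python
# Hypothetical sketch: prepend a condensed "reasoning checklist" as a system
# prompt before the user's request. The checklist text below is an
# illustrative placeholder, not an endorsed summary of the Sequences.

CONDENSED_GUIDELINES = "\n".join([
    "- Notice confusion; flag claims you are unsure about.",
    "- Reason step by step before stating a conclusion.",
    "- Consider the strongest counterargument before committing to a position.",
    "- Write concisely; cut filler that adds no information.",
])

def build_messages(user_request: str) -> list[dict]:
    """Assemble a chat-style message list with the checklist as a system prompt."""
    system_prompt = (
        "You are a great writer; you write concisely and make your points clear.\n"
        "Follow these reasoning guidelines:\n" + CONDENSED_GUIDELINES
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_request},
    ]

messages = build_messages("Summarize the argument in three sentences.")
```

Whether the model actually follows such instructions (rather than just pattern-matching their tone) is exactly the empirical question; the sketch only shows the mechanics of where the text would go.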
EDIT:
I see other people have already experimented with prompts, and it improved the results, but not sufficiently.
I think with the right prompting techniques you can indeed do better. I might post something later today.