You may be right, but the post misses a major point, so I didn’t read further: you didn’t even try to prompt it to write in a style you like more.
As for letting it come up with the logic, of course you shouldn’t do that; humans still reason better. At least very well-educated humans do.
OTOH, you should ask it for help in thinking through domains you’re not expert in; that will improve your opinions and writing.
For all their virtues, fine-tuned language models are pretty bad at imitating the styles of specific writers. I’ve instructed them to write like well-known authors, and I’ve given them moderate amounts of my own text, but they almost always fall back on that dreadful HR-speak college-essay style. Bleh.
A good example of how incorrigible and mode-collapsed a tuned model’s style can be is this 2023 poetry paper: even with 17 non-rhyming Walt Whitman poems in the prompt to few-shot it, ChatGPT still rhymed. (It’s gotten better, and now even passes my old “write a non-rhyming poem” test, but it was an alarming instance nonetheless.)
Good to know. That’s been my experience too, but I’ve also seen them adopt a dramatically different style when the prompt describes the style rather than naming a writer or supplying writing samples. So IDK; I haven’t tried much or even googled it. But neither, apparently, has the author of the post, which it seems they should have done before making such broad claims.
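To make the comparison concrete, here is a minimal sketch of the three prompting strategies being contrasted in this thread, written as the message payloads in the common chat-message format. The model call itself is omitted, and all example text and function names are hypothetical illustrations, not anything from the post or the poetry paper:

```python
# Three ways to steer an LLM's style, as prompt payloads only.
# (No API call is made; a real test would send each list to the
# same model and compare the outputs.)

def name_based_prompt(task: str) -> list[dict]:
    """Ask for a style by naming a writer -- the approach that
    often falls back on the default tuned-model style."""
    return [{"role": "user",
             "content": f"Write in the style of Walt Whitman: {task}"}]

def few_shot_prompt(task: str, samples: list[str]) -> list[dict]:
    """Give verbatim writing samples -- the poetry-paper approach
    (17 non-rhyming Whitman poems in the prompt)."""
    shots = "\n\n---\n\n".join(samples)
    return [{"role": "user",
             "content": (f"Here are examples of the target style:\n\n"
                         f"{shots}\n\nNow, matching that style exactly: "
                         f"{task}")}]

def description_based_prompt(task: str) -> list[dict]:
    """Describe the style's concrete features instead of naming
    the writer or pasting samples."""
    return [{"role": "system",
             "content": ("Write in long unrhymed free-verse lines, "
                         "first person, cataloguing concrete images. "
                         "Never rhyme; never use a regular meter.")},
            {"role": "user", "content": task}]
```

The claim above is that the third form sometimes shifts the style where the first two do not; verifying that would mean sending each payload to the same model and comparing the outputs.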
I haven’t had any more success myself in my few attempts, but “I told it to do this and it sucked” is a very bad way to evaluate LLM capabilities. Proper prompting often produces dramatically better results. So the post should be titled something like “Don’t let LLMs think for you or write for you unless you can get better results than their baseline unprompted style.”
If the author had bothered to read up on what kind of writing LLMs can produce with serious prompting effort, that would be a different story and a better post.