If you’re a little loose about the level of coherence required, 4o-mini managed it with several revisions and some spare tokens intended (in theory, though tbh a lot of this is guesswork) to give it spare compute for the hard part. (Share link, hopefully.) Final poem:
That’s interesting. I admit I’ve never really tried the ‘spare tokens’ trick seriously on any LLMs, but if it can get the S-poem in 3 samples with the spare token trick, maybe I’ve underestimated it. (I wonder how it would stack with the o1-preview/mini chain-of-thought? The example transcripts are rather verbose, so maybe those provide all of the ‘spare token’ effect by default.)