I always found that trick by the Coen brothers a bit distasteful… what were they trying to achieve? To convey that everything is a lie and nothing in this world is reliable? Sounds a lot like cheap teenage cynicism to me.
I have found that ChatGPT responds differently to the following prompts:
Tell me a story.
Tell me a story about a hero.
Tell me a realistic story.
Tell me a true story.
And if you give it specific instructions about what you want in the story, it will follow them, though not necessarily in the way you had in mind.
When you ask it for a true story, the story it returns will be true – at least in the cases I’ve checked. Now, if you keep probing one of the true stories, it might start making things up, but I haven’t tried to push it.
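For anyone who wants to try the same comparison, here is a minimal sketch using the OpenAI Python client. The model name, temperature, and the loop over the four prompts are my own assumptions for illustration, not anything specified in the comment above.

    # Minimal sketch: send the four prompts from the comment above to the
    # chat completions endpoint and print each reply for side-by-side comparison.
    # Assumptions: the `openai` package (v1+) is installed and OPENAI_API_KEY is
    # set in the environment; the model name below is illustrative only.
    from openai import OpenAI

    client = OpenAI()

    prompts = [
        "Tell me a story.",
        "Tell me a story about a hero.",
        "Tell me a realistic story.",
        "Tell me a true story.",
    ]

    for prompt in prompts:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model; substitute whichever you use
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"=== {prompt} ===")
        print(response.choices[0].message.content)
        print()

Running this a few times makes it easy to see how much the framing of the prompt (realistic vs. true vs. unqualified) changes what comes back.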