I think Pattern thought you meant “GPT-2 was trained on sentences generated by dumb programs.”
I expect that a sufficiently better GPT-2 could deduce how to pass a Turing test without a large number of Turing test transcripts in its training set, just by having the prompt say “What follows is the transcript of a passing Turing test.” and having someone on the internet talk about what a Turing test is. If you want to make it extra easy, let the first two replies to the judge be generated by a human.
My point is that it would be a better idea to use the prompt “What follows is a transcript of a conversation between two people:”.
That makes sense.
I doubt it, but it sure sounds like a good idea to develop a theory of which prompts are more useful/safe.