It was how it was trained, but Gurkenglas is saying that GPT-2 could produce a human-like conversation because Turing test transcripts are in the GPT-2 dataset, whereas it is the conversations between humans in the GPT-2 dataset that would make it possible for GPT-2 to produce human-like conversations and thus potentially pass the Turing Test.
I think Pattern thought you meant “GPT-2 was trained on sentences generated by dumb programs.”
I expect that a sufficiently better GPT-2 could deduce how to pass a Turing test without a large number of Turing test transcripts in its training set, just by having the prompt say “What follows is the transcript of a passing Turing test.” and having someone on the internet talk about what a Turing test is. If you want to make it extra easy, let the first two replies to the judge be generated by a human.
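If one wanted to try this concretely, here is a minimal sketch, assuming the HuggingFace transformers library and the public gpt2 checkpoint; the prompt wording and the two human-written seed replies are illustrative assumptions, not something from this thread:

```python
# Minimal sketch of the prompting idea: seed GPT-2 with a framing line and
# two human-written replies, then let it continue the transcript.
# Assumes the HuggingFace transformers library and the public "gpt2" model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "What follows is the transcript of a passing Turing test.\n"
    "Judge: Good afternoon. How are you today?\n"
    "Respondent: I'm doing well, thanks. A bit tired, honestly.\n"   # human-written seed reply
    "Judge: What did you have for breakfast?\n"
    "Respondent: Just coffee and toast, nothing exciting.\n"          # human-written seed reply
    "Judge: What do you think about the weather lately?\n"
    "Respondent:"
)

# The model generates only the next Respondent line; in an actual test the
# Judge turns would keep coming from the human judge.
completion = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(completion[0]["generated_text"])
```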
My point is that it would be a better idea to use the prompt “What follows is a transcript of a conversation between two people:”.
That makes sense.
I doubt it, but it sure sounds like a good idea to develop a theory of what prompts are more useful/safe.