That’s the exact opposite impression I got from this new segment. In what world is confusing “right” and “left” a demonstration of reasoning over mere association? How much more wrong could GPT-3 have gotten the answer? “Turning forward”? No, that wouldn’t appear in the corpus.
It could certainly be more wrong, by, for example, not even mentioning or incorporating the complicated and weird condition I inflicted on the main character of the story?
The reasoning doesn’t have to do any work of locating the hypothesis because you’re accepting vague answers and frequent wrong answers.
I noted all of the rerolls in the post. Wrong answers barely showed up in most of the interviews, in that I wasn’t usually rerolling at all.