Going a little further, I’m actually not sure that “fooling” GPT-3 is quite the best framing. GPT-3 isn’t playing a game where it’s trying to guess the scenario based on trustworthy textual cues and then describing the rest of it. That’s a goal we’re imposing upon it.
We might instead say that we were attempting to get GPT-3 to generate "Yelp complaints about bees in a restaurant" from a minimal cue, and did not succeed in doing so.