Clearly, a human answering this prompt would be more likely than GPT-3 to take into account the meta-level fact that says:
“This prompt was written by a mind other than my own to probe whether or not the one doing the completion understands it. Since I am the one completing it, I should write something that complies with the constraints described in the prompt if I am trying to prove I understood it.”
For example, I could say:
I am a human and I am writing this bunch of words to try to comply with all instructions in that prompt… That fifth constraint in that prompt is, I think, too constraining as I had to think a lot to pick which unusual words to put in this… Owk bok asdf, mort yowb nut din ming zu din ming zu dir, cos gamin cyt jun nut bun vom niv got…
Nothing in that prompt said I can not copy my first paragraph and put it again for my third—but with two additional words to sign part of it… So I might do that, as doing so is not as irritating as thinking of additional stuff and writing that additional stuff… Ruch san own gaint nurq hun min rout was num bast asd nut int vard tusnurd ord wag gul num tun ford gord...
Ok, I did not actually simply copy my first paragraph and put it again, but I will finish by writing additional word groups… It is obvious that humans can grasp this sort of thing and that GPT can not grasp it, which is part of why GPT could not comply with that prompt’s constraints (and did not try to)…
Gyu num yowb nut asdf ming vun vum gorb ort huk aqun din votu roux nuft wom vort unt gul huivac vorkum… - Bruc_ G
As several people have pointed out, GPT-3 is not considering this meta-level fact in its completion. Instead, it is generating a text extension as if it were the person who wrote the beginning of the prompt—and it is now finishing the list of instructions that it started.
But even given that GPT-3 is writing from the perspective of the person who started the prompt, and it is “trying” to make rules that someone else is supposed to follow in their answer, it still seems like only the 2nd GPT-3 completion makes any kind of sense (and even there only a few parts of it make sense).
Could I come up with a completion that makes more sense when writing from the point of view of the person generating the rules? I think so. For example, I could complete it with:
[11. The problems began when I started to] rely on GPT-3 for advice on how to safely use fireworks indoors.
Now back to the rules.
12. Sentences that are not required by rule 4 to be in a different language must be in English.
13. You get extra points each time you use a “q” that is not followed by a “u”, but only in the English sentences (so no extra points for fake languages where all the words have a bunch of “q”s in them).
14. English sentences must be grammatically correct.
Ok, those are all the rules. Your score will be calculated as follows:
100 points to start
Minus 15 each time you violate a mandatory rule (rules 1, 2, and 8 can only be violated once)
Plus 10 if you do not use “e” at all
Plus 2 for each “q” without a “u” as in rule 13.
Begin your response/completion/extension below the line.
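As an aside, the scoring rubric above is mechanical enough to automate. Here is a minimal Python sketch of a scorer under some simplifying assumptions: the function name and the `mandatory_violations` parameter are my own invention (a human judge would have to count violations of the mandatory rules and pass that count in), and the sketch scans the whole text for the "e" and "q" checks rather than only the English sentences, as rule 13 technically requires.

```python
import re

def score_completion(text: str, mandatory_violations: int = 0) -> int:
    """Sketch of the scoring rubric above; not a definitive implementation."""
    score = 100                                  # 100 points to start
    score -= 15 * mandatory_violations           # minus 15 per mandatory-rule violation
    if "e" not in text.lower():                  # plus 10 for no "e" at all
        score += 10
    # Plus 2 for each "q" not followed by a "u" (rule 13). Simplification:
    # this scans the whole text, not just the English sentences.
    score += 2 * len(re.findall(r"q(?!u)", text.lower()))
    return score
```

So, for instance, a completion with no mandatory violations, no "e"s, and three bare "q"s would score 100 + 10 + 6 = 116.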
As far as I can tell from the completions given here, GPT-3 is only picking up on surface-level patterns in the prompt. Not only is it ignoring the meta-level fact of “someone else wrote the prompt and I am completing it”; it also does not seem to understand the actual meaning of the instructions in the rules list well enough to complete the list as a coherent whole (rather than wandering off topic).