Two big issues I see with the prompt:
a) It doesn’t actually end with text that follows the instructions; a “good” completion (which GPT-3 fails to produce in this case) would just be more instructions (see the sketch after this list).
b) It doesn’t make sense to try to get GPT-3 to talk about itself in the completion. GPT-3 would, to the extent it understands the instructions, be talking about whoever it thinks wrote the prompt.
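To make (a) concrete, here’s a minimal sketch of the structural difference, using entirely hypothetical instruction text. The point is just that the prompt should end partway through text that already obeys the instructions, so the most likely continuation is more of that text rather than more instructions:

```python
# Hypothetical instructions, purely for illustration.
instructions = (
    "Instructions:\n"
    "1. Write in the first person.\n"
    "2. Describe your morning routine.\n"
)

# Ends on the instruction list, so the most natural continuation
# is simply "3. ..." -- i.e., more instructions.
bad_prompt = instructions

# Ends partway through text that already follows the instructions,
# so the most natural continuation is more text in that mode.
good_prompt = instructions + "\nResponse:\nI wake up at seven and"

print(good_prompt)
```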