I don’t feel at all tempted to anthropomorphize in that way, and I think it’s weird that EY is acting as if it were a reasonable thing to do.
“It’s tempting to anthropomorphize GPT-3 as trying its hardest to make John smart” seems obviously incorrect if it’s explicitly phrased that way, but e.g. the “Giving GPT-3 a Turing Test” post seems to implicitly assume something like it:
This gives us a hint for how to stump the AI more consistently. We need to ask questions that no normal human would ever talk about.
Q: How many eyes does a giraffe have?
A: A giraffe has two eyes.
Q: How many eyes does my foot have?
A: Your foot has two eyes.
Q: How many eyes does a spider have?
A: A spider has eight eyes.
Q: How many eyes does the sun have?
A: The sun has one eye.
Q: How many eyes does a blade of grass have?
A: A blade of grass has one eye.
Now we’re getting into surreal territory. GPT-3 knows how to have a normal conversation. It doesn’t quite know how to say “Wait a moment… your question is nonsense.” It also doesn’t know how to say “I don’t know.”
The author says that this “stumps” GPT-3, which “doesn’t know how to” say that it doesn’t know. That reads as if GPT-3 were doing its best to give “smart” answers and simply couldn’t. But Nick Cammarata showed that if you give GPT-3 a prompt in which nonsense questions are explicitly called out as nonsense, it will readily do the same.
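To make that concrete, here is a minimal sketch of this kind of prompt. It assumes the legacy (pre-1.0) `openai` Python client and the base `davinci` engine; the exact wording Cammarata used differed, so the few-shot examples below are only illustrative. The point is that the prompt itself demonstrates the desired behavior of calling out nonsense, rather than expecting the model to volunteer it.

```python
# Sketch of a prompt whose few-shot examples call out nonsense questions as
# nonsense. Prompt wording and parameters are illustrative; the call assumes
# the legacy (pre-1.0) openai Python client.
import openai

openai.api_key = "sk-..."  # your API key

PROMPT = """I answer questions sensibly. If a question is nonsense, I say "that question is nonsense" instead of guessing.

Q: How many eyes does a giraffe have?
A: A giraffe has two eyes.

Q: How many eyes does my foot have?
A: That question is nonsense.

Q: How many eyes does the sun have?
A: That question is nonsense.

Q: How many eyes does a blade of grass have?
A:"""

response = openai.Completion.create(
    engine="davinci",   # base GPT-3 model used in the 2020 experiments
    prompt=PROMPT,
    max_tokens=32,
    temperature=0,
    stop=["\n"],        # stop at the end of the answer line
)
print(response["choices"][0]["text"].strip())
```

With a prompt like this, the completion for the blade-of-grass question tends to follow the pattern set by the examples and flag it as nonsense, rather than inventing an eye count.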