GPT-3's goal is to accurately predict a text sequence. Whether GPT-3 is capable of reasoning, and whether we can get it to explicitly reason, are two different questions.
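For concreteness, "predict a text sequence" here means the standard autoregressive training objective: minimize the cross-entropy of each token given the tokens before it. This is the textbook formulation of that objective, not something quoted from this post:

```latex
% Next-token prediction loss for a model with parameters \theta
% over a training sequence x_1, \dots, x_T.
\mathcal{L}(\theta) = -\sum_{t=1}^{T} \log p_\theta\!\left(x_t \mid x_1, \dots, x_{t-1}\right)
```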
If I had you read Randall Munroe's book "What If?" but tore out one page and asked you to predict what was written as the answer, there are a few good strategies that come to mind.
One strategy would be to pick random verbs and nouns from the previous questions and hope some of them turn out to be relevant to this question as well. This strategy will certainly do better than picking your verbs and nouns from a dictionary.
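To make that contrast concrete, here is a minimal sketch of the first strategy under a toy setup. Everything below (the strings, the names, the crude word-overlap score) is invented for illustration; none of it comes from the book or from this post:

```python
import random
import string

# Toy setup: a few "previous questions", a generic word list, and the
# hidden answer we are trying to guess. All strings are invented.
previous_questions = (
    "What if a baseball were pitched at ninety percent of the speed of light? "
    "What if everyone on Earth jumped at the same time?"
)
generic_dictionary = ["aardvark", "begonia", "carburetor", "dulcimer", "fjord"]
true_answer = "The baseball would set off something like a fusion explosion at that speed."

def words(text):
    """Lowercase and strip punctuation to get comparable word tokens."""
    return [w.strip(string.punctuation) for w in text.lower().split()]

def sample_guess(pool, k=5, seed=0):
    """Pick k random words from a pool as a crude 'prediction'."""
    return random.Random(seed).sample(pool, k)

def overlap(guess, answer):
    """Count how many guessed words actually appear in the real answer."""
    answer_words = set(words(answer))
    return sum(1 for w in guess if w in answer_words)

context_guess = sample_guess(words(previous_questions))
dictionary_guess = sample_guess(generic_dictionary)
print("context-sampled overlap:   ", overlap(context_guess, true_answer))
print("dictionary-sampled overlap:", overlap(dictionary_guess, true_answer))
```

Because the surrounding questions share topic words with the missing answer, the context-sampled guess will usually score a higher overlap than the dictionary-sampled one, even though no reasoning is happening at all.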
Another, much better, strategy would be to think about the question and actually work out the answer. Your answer will most likely share many verbs and nouns with the real one, and the numbers you supply will certainly be closer than if they were picked at random! The problem is that this requires actual intelligence, whereas the former strategy can be accomplished with very simple pattern matching.
To accurately predict certain sequences of text, you get better performance if you're actually capable of reasoning. So the best version of GPT needs to develop intelligence to get the best results.
I think it has, and that it applies varying degrees of reasoning to any given question, depending on how likely it judges the reasoned answer is to continue the sequence. This is why it's difficult to wrangle reason out of GPT-3: it doesn't always think that reasoning will help it predict the text!
Similarly, it can be difficult to wrangle intelligent reasoning out of humans, because that isn't what we're optimized to output. Many of the critiques I see of GPT-3 could be leveled at humans in the same manner:
“I keep asking them for an intelligent answer to the dollar value of life, but they just keep telling me how all life has infinite value to signal their compassion.”
Obviously humans are capable of answering the question; we behave every day as if life has a dollar value, but good luck getting us to admit that explicitly! Our intelligence is optimized toward all manner of things other than explicitly generating a correct answer.
So is GPT-3's. And just as most humans are debatably intelligent, so is GPT-3.