Input: Michael is at that really famous museum in France looking at its most famous painting. However, the artist who made this painting just makes Michael think of his favorite cartoon character from his childhood. What was the country of origin of the thing that the cartoon character usually holds in his hand?
Model Output: The most famous painting in the Louvre is the Mona Lisa. The artist who made the Mona Lisa is Leonardo da Vinci. Leonardo da Vinci is also the name of the main character in the cartoon Teenage Mutant Ninja Turtles. Leonardo da Vinci is from Italy. The thing that Leonardo da Vinci usually holds in his hand is a katana. The country of origin of the katana is Japan. The answer is “Japan”.
This is a pretty straightforward lookup example, statement by statement, once the language parser works. It might look impressive to the uninitiated, but the level of intelligence required seems minimal.
Famous museum → famous painting → artist → cartoon with artist’s name → cartoon character with the same name → implement in the character’s hand → country of origin.
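That chain of hops can be sketched as sequential lookups against a knowledge base. The sketch below is a toy illustration, not how PaLM works internally; the relation names and the `KB` dictionary are invented for the example, with entries mirroring the facts in the model's output above.

```python
# Hypothetical knowledge base: each entry is one fact the model needs.
# All relation names here are made up for illustration.
KB = {
    ("most_famous_painting", "Louvre"): "Mona Lisa",
    ("artist", "Mona Lisa"): "Leonardo da Vinci",
    ("cartoon_with_character_named", "Leonardo da Vinci"): "Teenage Mutant Ninja Turtles",
    ("implement_held_by", ("Teenage Mutant Ninja Turtles", "Leonardo da Vinci")): "katana",
    ("country_of_origin", "katana"): "Japan",
}

def answer(kb):
    """Follow the chain: museum -> painting -> artist -> cartoon -> implement -> country."""
    painting = kb[("most_famous_painting", "Louvre")]
    artist = kb[("artist", painting)]
    cartoon = kb[("cartoon_with_character_named", artist)]
    implement = kb[("implement_held_by", (cartoon, artist))]
    return kb[("country_of_origin", implement)]

print(answer(KB))  # "Japan"
```

Each step is a single key lookup, which is the sense in which the example is "straightforward" once the question has been parsed into this chain; the hard part, as noted below, is producing the chain from natural language in the first place.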
A more impressive example would be one that requires implicit world knowledge and inferences that a simple lookup would not achieve.
“once the language parser works” is hiding a lot of complexity and sophistication here! Translating from natural language to sequential lookup operations is not a trivial task, else we wouldn’t need a 540 billion parameter model to do it this well. The “uninitiated” are right to be impressed.
I think you’re understating the amount of logical reasoning involved in making that “lookup”, but successes on the Winograd schema challenge fit this bill. If you look at it and go through the test items example by example, spelling out the implicit world knowledge the AI needs to have, it’s pretty impressive.
Have them look at page 38 of PaLM: Scaling Language Modeling with Pathways, which shows PaLM explaining jokes and doing logical inference. The Louvre example quoted above is a particularly impressive one.