It would surprise me if human-level natural-language processing were possible without sitting on top of a fairly sophisticated and robust world-model.
I mean, just as an example, consider how much a system has to know about the world to realize that in your next-to-last sentence, “It’s” is most likely a typo for “Isn’t.”
Granted, one could manually construct and maintain such a model rather than build tools that maintain it automatically based on ongoing observations, but the latter seems like it would pay off over time.