This paper about AI by Hector J. Levesque seems interesting: http://www.cs.toronto.edu/~hector/Papers/ijcai-13-paper.pdf

It extensively discusses something called ‘Winograd schema questions’. If you want examples, there is a list here: http://www.cs.nyu.edu/faculty/davise/papers/WS.html (and there is a small illustrative sketch of one further down in this post).

The paper’s abstract does a fairly good job of summing it up, although it doesn’t explicitly mention Winograd schema questions:
The science of AI is concerned with the study of intelligent forms of behaviour in computational terms. But what does it tell us when a good semblance of a behaviour can be achieved using cheap tricks that seem to have little to do with what we intuitively imagine intelligence to be? Are these intuitions wrong, and is intelligence really just a bag of tricks? Or are the philosophers right, and is a behavioural understanding of intelligence simply too weak? I think both of these are wrong. I suggest in the context of question-answering that what matters when it comes to the science of AI is not a good semblance of intelligent behaviour at all, but the behaviour itself, what it depends on, and how it can be achieved. I go on to discuss two major hurdles that I believe will need to be cleared.
If you have time, this seems worth a read. I started reading other Hector J. Levesque papers because of it.
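To give a concrete sense of what a Winograd schema question looks like without clicking through, here is a minimal sketch of my own (not code from the paper), using the well-known trophy/suitcase example; the class and field names are purely illustrative:

# A minimal sketch (not from the paper) of a Winograd schema question
# represented as data. Each schema is a sentence containing an ambiguous
# pronoun, two candidate referents, and a "special" word whose alternate
# flips the correct answer -- which is what makes these questions hard to
# answer with cheap statistical tricks.

from dataclasses import dataclass

@dataclass
class WinogradSchema:
    sentence: str      # sentence with a {word} slot and an ambiguous pronoun
    pronoun: str       # the pronoun to be resolved
    candidates: tuple  # the two possible referents
    answers: dict      # special word -> correct referent

# The classic trophy/suitcase example from the Winograd schema literature.
trophy = WinogradSchema(
    sentence="The trophy doesn't fit in the brown suitcase because it is too {word}.",
    pronoun="it",
    candidates=("the trophy", "the suitcase"),
    answers={"big": "the trophy", "small": "the suitcase"},
)

# Swapping the special word flips which candidate the pronoun refers to.
for word, referent in trophy.answers.items():
    question = trophy.sentence.format(word=word)
    print(f"{question}  ->  '{trophy.pronoun}' refers to {referent}")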
Edit: Upon searching, I also found some critiques of Levesque’s work, so looking up opposition to some of these points may be a good idea.