SHRDLU was very impressive by any standards. It was released in the very early 1970s, when computers had only a few kilobytes of memory. Fortran was only about 15 years old, people had only just started to program, and much of that was still done on paper tape.
SHRDLU took a number of preexisting ideas about language processing and planning and combined them beautifully. And SHRDLU really did understand its tiny world of logical blocks.
Given how much had been achieved in the decade prior to SHRDLU, it was entirely reasonable to assume that real intelligence would be achieved in the relatively near future. Which is, of course, the point of the article.
(Winograd did cheat a bit by using Lisp. Today such a program would need to be written in C++ or possibly Java, which would take much longer. Progress is not unidirectional.)
I hate the term “Neural Network”, as do many serious people working in the field.
There are perceptrons, which were inspired by neurons but are quite different. There are other related techniques that optimize in various ways. There are real neurons, which are very complex and rather arbitrary. And then there is the greatly simplified Integrate and Fire (IF) abstraction of a neuron, often with Hebbian learning added.
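To make the contrast concrete, here is a minimal sketch of the two units in plain Python/NumPy (textbook formulations, with illustrative names and parameters of my own choosing): a perceptron computes one thresholded weighted sum, while an IF neuron carries membrane state over time and emits spikes.

    import numpy as np

    def perceptron(x, w, b):
        # A perceptron: a weighted sum pushed through a hard threshold.
        return 1 if np.dot(w, x) + b > 0 else 0

    def lif_step(v, current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
        # One step of a leaky integrate-and-fire neuron: the membrane
        # potential v leaks toward rest while integrating the input;
        # crossing the threshold emits a spike and resets v.
        v = v + dt * (-v / tau + current)
        if v >= v_thresh:
            return v_reset, 1   # spiked
        return v, 0             # no spike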
Perceptrons solve practical problems, but are not the answer to everything as some would have you believe. There are new and powerful kernel methods that can automatically condition data and that extend perceptrons. There are many other algorithms, such as learning hidden Markov models. IF neurons are used to try to understand brain functionality, but are not useful for solving real problems (they are far too computationally expensive for what they do).
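As an illustration of the kernel idea (a sketch of the standard kernel perceptron, not any particular library; rbf_kernel, gamma, and the rest are names invented for this example): instead of learning a weight vector directly, the algorithm counts mistakes per training example and classifies by a kernel-weighted vote, which lets a plain perceptron separate data that is not linearly separable in the original space.

    import numpy as np

    def rbf_kernel(a, b, gamma=1.0):
        return np.exp(-gamma * np.sum((a - b) ** 2))

    def train_kernel_perceptron(X, y, kernel=rbf_kernel, epochs=10):
        # y holds labels in {-1, +1}; alpha[i] counts mistakes on example i.
        n = len(X)
        alpha = np.zeros(n)
        for _ in range(epochs):
            for i in range(n):
                # Predict by a kernel-weighted vote over past mistakes.
                s = sum(alpha[j] * y[j] * kernel(X[j], X[i]) for j in range(n))
                if y[i] * s <= 0:   # misclassified: remember this example
                    alpha[i] += 1
        return alpha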
Which one of these quite different technologies is being referred to as “Neural Network”?
The idea of wiring perceptrons back onto themselves with state is old. Perceptrons have been shown to be able to emulate just about any function, so yes, with state fed back they would be Turing complete. Being able to learn meaningful weights for such “recurrent” networks is relatively recent (1990s?).
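The recurrence itself is simple to write down; here is a minimal Elman-style sketch (again with invented names, nothing authoritative) of perceptron-like units wired back onto themselves:

    import numpy as np

    def rnn_step(x, h, W_in, W_rec, b):
        # The hidden state h is fed back into the next step: the
        # "wired back onto themselves with state" idea in code.
        return np.tanh(W_in @ x + W_rec @ h + b)

    def run(xs, h0, W_in, W_rec, b):
        # Unroll over a sequence; the state threads through time.
        h = h0
        for x in xs:
            h = rnn_step(x, h, W_in, W_rec, b)
        return h

Writing that down is the easy part; learning useful values for W_rec is the hard part, which is what techniques such as backpropagation through time were developed to address.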