“Everyone” has known about holography since “forever.” That’s not the point of the article. Yevick’s point is that there are two very different kinds of objects in the world and two very different kinds of computing regimes. One regime is well-suited for one kind of object while the other is well-suited for the other kind of object. Early AI tried to solve all problems with one kind of computing. Current AI is trying to solve all problems with a different kind of computing. If Yevick was right, then both approaches are inadequate. She may have been on to something and she may not have been. But as far as I know, no one has followed up on her insight.
I strongly suspect there is, but I don’t have the tools for it myself. Have you seen my post, Toward a Theory of Intelligence: Did Miriam Yevick know something in 1975 that Bengio, LeCun, and Hinton did not know in 2018?
Also, check out the quotation from François Chollet near the end of this: The role of philosophical thinking in understanding large language models: Calibrating and closing the gap between first-person experience and underlying mechanisms.
These ideas weren’t unfamiliar to Hinton. For example, see the following paper on “Holographic Reduced Representations” by a PhD student of his from 1991: https://www.ijcai.org/Proceedings/91-1/Papers/006.pdf