But Eliezer viewed that robotics project as an example of carefully designed biological imitation, one whose mechanism of action the researchers understood down to the deep details. Across multiple posts from this period, Eliezer emphasizes his belief that AGI can only come from a well-understood AI architecture: either a detailed imitation of the brain, or a crafted logic-based approach. This robotics project was an example of the latter, despite the fact that it used neurons. As he described it:
> This robot ran on a “neural network” built by detailed study of biology. The network had twenty neurons or so. Each neuron had a separate name and its own equation. And believe me, the robot’s builders knew how that network worked.
>
> Where does that fit into the grand dichotomy? Is it top-down? Is it bottom-up? Calling it “parallel” or “distributed” seems like kind of a silly waste when you’ve only got 20 neurons—who’s going to bother multithreading that?
So this is, in my view, another clear example of Eliezer being excited about an AI paradigm that ultimately did not lead to the black-box, neural-network-based LLMs that actually seem to have put us on the path to AGI.