Suppose someone in 1900 looked at balloons and birds and decided that future flying machines would have wings. They called such winged machines “birdomorphic” and said that future flying machines would be more like birds.
I feel you are using “neuromorphic” the same way. Suppose it is true that future computers will be of a Processor-In-Memory (PIM) design. Thinking of them as “like a brain” is like thinking a fighter jet is like a sparrow because they both have wings.
Suppose a new processor architecture is developed, and it’s basically PIM. TensorFlow runs on it. The AI software people barely notice the change.
The set of AGI models you could run efficiently on a large-scale pure PIM processor is basically just the set of brain-like models.
If that were hypothetically true, it would be a specific fact not established by anything shown here.
If you are specific about what you mean by “brain-like”, it would be quite a surprising fact. It would imply that the human brain is a unique pinnacle of what it is possible to achieve. The human brain is shaped by selection pressures focused on ancestral humans surviving on the savannah. It would be an enormous coincidence if the abstract space of computation and the nature of fundamental physical law meant that the most efficient possible mind just so happened to think in a way that looked optimised for reproductive fitness in the evolutionary environment.
It is plausible that the human brain is one near-optimum out of many: it is fundamentally impossible to make anything with an efficiency of >100%, but it’s easy to reach 90% efficiency, and the human brain could be one design of many that is >90%.
It is even plausible that all designs of >90% efficiency must have some feature that human brains have. Maybe all efficient flying machines must use aerofoils, but the space of efficient designs still includes birds, planes and many other possibilities.
I will claim that the space of minds at least as efficient as human minds is big. At the very least it contains minds with totally different emotions from humans’, and probably minds with nothing like emotions at all. It probably also contains minds with all sorts of features we can’t easily conceive of.
Brain-like != human brain.
By brain-like I mostly just meant neuromorphic, so the statement is almost a tautology. DL models are already naturally somewhat ‘brain-like’ in the space of all ML models, as DL is a form of vague brain reverse-engineering. But most of the remaining key differences ultimately stem from the low-level circuit differences between von Neumann and neuromorphic architectures. As just one example: DL currently uses large-batch GD-style training because that is what is actually efficient on von Neumann architecture, but it will necessarily shift to brain-style small-batch techniques on neuromorphic/PIM architecture, as that is what efficiency dictates.
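To make that batch-size point concrete, here is a minimal sketch (toy linear regression in plain NumPy; the learning rates, sizes, and model are illustrative assumptions of mine, not anything claimed in the thread) contrasting one gradient update per large batch with brain-style one-example-at-a-time updates:

```python
# Illustrative sketch: large-batch GD vs online (batch-size-1) updates.
# Toy linear regression; all hyperparameters are arbitrary, not benchmarks.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1024, 8))                # synthetic inputs
w_true = rng.normal(size=8)
y = X @ w_true + 0.01 * rng.normal(size=1024)  # noisy targets

def grad(w, xb, yb):
    """Mean-squared-error gradient on a (mini)batch."""
    return 2 * xb.T @ (xb @ w - yb) / len(yb)

# Large-batch GD: one weight update per pass over a big batch.
# Efficient on von Neumann/GPU hardware because memory traffic is
# amortised over one big matrix multiply.
w_batch = np.zeros(8)
for _ in range(200):
    w_batch -= 0.05 * grad(w_batch, X, y)      # whole dataset as one batch

# Online updates: one weight update per example, brain-style.
w_online = np.zeros(8)
for xb, yb in zip(X, y):
    w_online -= 0.05 * grad(w_online, xb[None, :], np.array([yb]))

print(np.linalg.norm(w_batch - w_true), np.linalg.norm(w_online - w_true))
```

Both loops recover the weights on this toy problem; the claimed difference is purely about which loop the hardware makes cheap, with PIM-style hardware removing the batching advantage that currently favours the first.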
Almost a tautology = carries very little useful information.
In this case most of the information is carried by the definition of “neuromorphic”. A researcher proposes a new learning algorithm. You claim that if it’s not neuromorphic then it can’t be efficient. How do you tell whether the algorithm is neuromorphic?