That’s not my point. Of course everything is reducible to a Turing machine, in theory. However, that does not mean you can make this reduction practically, or it would be very inefficient. The Von Neumann architecture implies its own hierarchy of information processing, which is good for programming various kinds of formal algorithms. However, IMHO, it does not support the hierarchy of information processing required for AI, which should be a neural network similar to a human brain. You cannot program each and every algorithm or mode of behavior that a neural network is capable of producing on a Von Neumann computer. To me, many decades of futile attempts to build AI along these lines have already proven its practical impossibility. Only understanding how neural networks operate in nature and implementing this type of behavior can finally make a difference. And how does the Von Neumann architecture fit in here? I see only one possible application: modelling the work of neurons. Given the complexity of a human brain (100 billion neurons, 100 trillion connections), this is a challenge for even the most advanced modern supercomputers. You can count on further performance improvements, of course, since Moore’s law is still in effect, but this is not the kind of solution that’s going to be practical. Perhaps neuronal circuits printed directly on microchips would be the hardware for future AI brains.
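To make the scale of the challenge concrete, here is a back-of-envelope estimate of the compute needed to simulate such a network in real time. The neuron and synapse counts come from the figures above; the average firing rate and the operations per synaptic event are illustrative assumptions, not measured values.

```python
# Rough cost of real-time simulation of a brain-scale network on a
# conventional computer. Firing rate and ops-per-event are assumptions.

NEURONS = 100e9          # ~100 billion neurons (figure from the comment)
SYNAPSES = 100e12        # ~100 trillion connections (figure from the comment)
MEAN_RATE_HZ = 10        # assumed average firing rate per neuron
OPS_PER_EVENT = 10       # assumed arithmetic ops per synaptic event

# Every spike must be propagated across the neuron's outgoing synapses.
synaptic_events_per_sec = SYNAPSES * MEAN_RATE_HZ
ops_per_sec = synaptic_events_per_sec * OPS_PER_EVENT

print(f"~{ops_per_sec:.0e} ops/sec needed for real-time simulation")
```

Under these assumptions the requirement lands around 10^16 operations per second, which is indeed supercomputer territory, consistent with the point above.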
(if you respond by clicking “reply” at the bottom of comments, the person to whom you’re responding will be notified and it will organize your comment better)
I am pretty sure that simulating one architecture on another can generally be done with a mere multiplicative penalty. I’m not under the impression that simulating neural networks is terribly challenging.
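The kind of simulation in question can be sketched in a few lines: a small network of leaky integrate-and-fire neurons running on an ordinary von Neumann machine. The network size, time constants, and weight distribution here are all illustrative assumptions, not a model of any real circuit.

```python
import random

random.seed(0)
N = 50            # number of neurons (illustrative)
DT = 1.0          # time step, ms
TAU = 20.0        # membrane time constant, ms
THRESHOLD = 1.0   # spike threshold

# Random recurrent weights (assumed Gaussian, not biologically calibrated).
W = [[random.gauss(0, 0.1) for _ in range(N)] for _ in range(N)]

v = [0.0] * N           # membrane potentials
spikes = [False] * N    # which neurons fired on the last step

for step in range(100):
    # Constant drive plus input from neurons that spiked last step.
    inputs = [0.05 + sum(W[i][j] for j in range(N) if spikes[j])
              for i in range(N)]
    for i in range(N):
        v[i] += DT * (-v[i] / TAU + inputs[i])   # leaky integration
    spikes = [v[i] >= THRESHOLD for i in range(N)]
    for i in range(N):
        if spikes[i]:
            v[i] = 0.0                           # reset after a spike

print(sum(spikes), "neurons spiking at the final step")
```

Note the multiplicative penalty in action: every time step touches all N² synapses, so a serial machine pays a fixed overhead factor relative to hardware where the synapses operate in parallel, but nothing worse than that.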
Also, most neurons are redundant (since there’s a lot of noise in a neuron). If you’re simulating something along the lines of a human brain, the very first simulations might be very challenging when you don’t know what the important parts are, but I think there’s good reason to expect dramatic simplification once you understand what the important parts are.
I would be cautious regarding noise or redundancy until we know exactly what’s going on in there. Maybe we don’t understand some key aspects of neural activity and think of them as just noise. I read somewhere that the old idea that only a fraction of brain capacity is being used is not actually true.
I partially agree with you: modern computers can cope with neural network simulations, but IMO only of limited network size. And I don’t expect dramatic simplifications here (rather complications :) ).
It will all start with simple neuronal networks modeled on computers. Forget about AI for now, it is a rather distant future; the first robots will be insect-like creatures. As they grow in complexity, real-time performance problems will become an issue. And that will be a driving force to consider other architectures to improve performance. Non-von-Neumann solutions will emerge, paving the way for further progress. This is what, I think, is going to happen.