I am pretty sure that simulating one architecture on another can generally be done with only a multiplicative slowdown. I'm not under the impression that simulating neural networks is terribly challenging.
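To give a sense of why simulating a neural network on a conventional machine is mechanically simple, here is a minimal sketch in Python/NumPy. The network size, time step, and leaky-integrate-and-fire dynamics are illustrative assumptions of mine, not a claim about how the brain works:

```python
import numpy as np

# Minimal leaky integrate-and-fire network simulated step by step on an
# ordinary (von Neumann) machine. All parameters are illustrative guesses.
rng = np.random.default_rng(0)

N = 1000                       # neurons
dt, tau = 1e-3, 0.02           # time step and membrane time constant (s)
v_thresh, v_reset = 1.0, 0.0   # spike threshold and reset potential

W = rng.normal(0.0, 0.1, size=(N, N)) / np.sqrt(N)  # random synaptic weights
v = np.zeros(N)                # membrane potentials
total_spikes = 0

for step in range(1000):       # one simulated second
    spikes = v >= v_thresh     # which neurons fire this step
    total_spikes += spikes.sum()
    v[spikes] = v_reset        # reset the ones that fired
    # recurrent input + constant drive + noise, then leaky integration
    current = W @ spikes.astype(float) + 0.9 + rng.normal(0.0, 0.5, size=N)
    v += (dt / tau) * (-v + current)

print(f"{total_spikes} spikes in 1 s of simulated activity")
```

The whole update is one matrix-vector product per time step, so emulating the "neural" architecture serially costs only a multiplicative factor over running it natively in parallel.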
Also, most neurons are redundant (there's a lot of noise in any individual neuron). If you're simulating something along the lines of a human brain, the very first simulations might be very challenging while you don't yet know what the important parts are, but I think there's good reason to expect dramatic simplification once you do.
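A toy illustration of the redundancy point, under an assumed population-coding model (each neuron reports the same signal plus independent noise): averaging n of them shrinks the error by a factor of sqrt(n), so a large noisy population could in principle be replaced by a few precise units:

```python
import numpy as np

# Toy population-coding model (an assumption, not established neuroscience):
# each neuron encodes the same signal corrupted by independent noise.
rng = np.random.default_rng(1)

signal = 0.7
noise_sd = 1.0

for n in (1, 100, 10_000):
    neurons = signal + rng.normal(0.0, noise_sd, size=n)
    estimate = neurons.mean()
    # the standard error of the mean falls as noise_sd / sqrt(n)
    print(f"{n:6d} neurons: estimate = {estimate:.3f}, "
          f"expected error ~ {noise_sd / np.sqrt(n):.3f}")
```

If redundancy of this kind is what most neurons are doing, a simulator that models the pooled estimate directly needs far fewer units.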
I would be cautious about attributing things to noise or redundancy until we know exactly what's going on in there. Maybe we don't understand some key aspects of neural activity and are dismissing them as mere noise. I have also read that the old idea that we use only a fraction of our brain's capacity is not actually true.
I partially agree with you: modern computers can cope with neural network simulations, but IMO only up to a limited network size. And I don't expect dramatic simplifications here (rather complications :) ).
It will all start with simple neuronal networks modeled on computers. Forget about AI for now; it is a rather distant prospect, and the first robots will be insect-like creatures. As they grow in complexity, real-time performance will become an issue, and that will be the driving force behind considering other architectures. Non-von-Neumann solutions will emerge, paving the way for further progress. That, I think, is what is going to happen.
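For a rough sense of where the real-time wall sits, here is a back-of-envelope estimate. Every figure is a commonly cited rough assumption (~1e11 neurons, ~1e4 synapses each, a few Hz mean firing rate, a handful of operations per synaptic event), not a measured fact:

```python
# Back-of-envelope cost of real-time brain-scale simulation.
# Every number below is a rough, commonly cited assumption.
neurons = 1e11                 # neurons in a human brain
synapses_per_neuron = 1e4
firing_rate_hz = 5             # mean spikes per neuron per second
ops_per_synaptic_event = 10

ops_per_second = (neurons * synapses_per_neuron
                  * firing_rate_hz * ops_per_synaptic_event)
print(f"~{ops_per_second:.0e} ops/s for real time")  # ~5e16 ops/s
```

That lands in petascale territory, which is roughly where the scaling pressure toward other architectures would begin to bite.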