Do you know how much processing power is required to run it in real-time?
In the original article (PDF, free to download after you register) I find:
“The artificial connectome has been extended to a single application written in Python and run on a Raspberry Pi computer.”
The original article also links this YouTube video, for those who are interested.
For those of you not familiar with the technology, Python is a programming language not known for its speed, and the Raspberry Pi is a cheap, low-powered computer smaller than your palm.
For those of you familiar with the technology, this is just another reason why Python is amazing.
Basic Python is very slow, but numerical computing libraries such as Numpy are almost as fast as C, and Cython can compile Python into C if you add type declarations. (More reasons why Python is awesome!)
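To illustrate the Cython point (a toy function of my own, nothing from the project): in Cython's pure-Python mode the type declarations are ordinary annotations, so the same file runs as plain Python and compiles to C with cythonize.

```python
# Toy example (not from the article): plain Python that Cython can also
# compile, where the cython.* annotations become C type declarations.
import cython

def weighted_sum(weights, inputs) -> cython.double:
    total: cython.double = 0.0
    i: cython.int
    for i in range(len(weights)):
        total += weights[i] * inputs[i]
    return total
```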
I would imagine that numerical computing libraries are the natural choice for neural simulations, so their program might have been running at close to C speeds.
This gives us an upper bound of about 1.5 MB of RAM, 100 kFLOPS, and 10 cents per neuron. Possibly a lot lower.
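A back-of-envelope check of that bound (my own assumptions, not stated in the comment: a 512 MB Model B at roughly $35 with ~30 MFLOPS of sustained floating-point throughput) lands in the same ballpark:

```python
# Rough per-neuron upper bound, dividing assumed Raspberry Pi resources
# by the 302 neurons of the C. elegans connectome.
NEURONS = 302

ram_bytes   = 512 * 1024 * 1024   # assumed 512 MB Model B
flops       = 30e6                # assumed sustained FLOPS
price_cents = 3500                # assumed ~$35 board price

print(ram_bytes / NEURONS / 1e6)  # ~1.8 MB of RAM per neuron
print(flops / NEURONS)            # ~100 kFLOPS per neuron
print(price_cents / NEURONS)      # ~11.6 cents per neuron
```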
If you look at the description, you find that the model used is very simple and boils down to probably fewer than N*M*2 machine instructions per cycle (N = number of neurons, here 302; M = average fan-in), because the operation is really only sum and threshold. I can only guess at M, but even if we approximate it with N, a Raspberry Pi with a 700 MHz ARM core should be able to run a nematode connectome at about 4000x its natural speed.
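The arithmetic behind that, assuming M ≈ N and that the biological network updates on the order of once per second (the 1 Hz figure is my assumption, not stated above):

```python
# Rough check of the ~4000x figure for the sum-and-threshold model.
N = 302                                 # neurons
M = N                                   # average fan-in, approximated by N
instructions_per_update = 2 * N * M     # one multiply-add plus threshold work
clock_hz = 700e6                        # 700 MHz ARM core

updates_per_second = clock_hz / instructions_per_update
print(updates_per_second)               # ~3800, i.e. ~4000x a ~1 Hz natural rate
```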
The point here is not the necessary speed but the ease of simulation and visualization of effects.
I don’t know about the power needed to simulate the neurons, but my guess is that most of the resources are spent not on the calculations, but on interprocess communication. Running 302 processes on a Raspberry Pi and keeping hundreds of UDP sockets open probably takes a lot of its resources.
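To make the overhead concrete, here is a minimal sketch (my own, not the project's code; ports, message format, and threshold are invented) of what a one-process-per-neuron UDP setup looks like. Every incoming spike is a syscall, and every firing sends one packet per downstream connection:

```python
# Minimal sketch of a single neuron process in a one-process-per-neuron,
# UDP-per-message architecture. Not the author's code; values are invented.
import socket

MY_PORT = 10000                                          # hypothetical port for this neuron
THRESHOLD = 30
TARGETS = [("127.0.0.1", 10001), ("127.0.0.1", 10002)]   # downstream neurons

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", MY_PORT))

accumulator = 0
while True:
    data, _ = sock.recvfrom(64)        # block until an upstream input arrives
    accumulator += int(data)           # sum the incoming weighted input
    if accumulator > THRESHOLD:        # threshold crossed: "fire"
        for target in TARGETS:
            sock.sendto(b"1", target)  # one UDP packet per downstream connection
        accumulator = 0
```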
The technical solution is neither innovative nor fast. The benefits are in its distributed nature (every neuron could be simulated on a different computer) and in the simplicity of implementation, at least while 100% faithfulness to the underlying mathematical model is not required. I have no idea how the author plans to avoid unintended data loss in the not-unusual case when some UDP packets are dropped. Retransmission (TCP) is not really an option either, as the system has to run in real time.
If each simulated “neuron” is just a linear threshold unit, as described by the paper, using a whole process to run it and exchange messages by UDP looks like a terribly wasteful architecture.
Maybe the author wants to eventually implement a computationally expensive, biologically accurate neuron model, but I still don't see the point of this architecture: even if the individual neurons were biologically accurate, the overall simulation wouldn't be, due to the non-deterministic delays and packet losses introduced by UDP messaging.
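For contrast, the whole 302-neuron linear threshold network fits in a single in-process update per tick, with no sockets involved. A sketch with placeholder weights (random values, not the worm's actual connectome):

```python
# The whole linear threshold network as one matrix-vector update per tick.
# Weights and threshold are random placeholders, not the real connectome.
import numpy as np

N = 302
rng = np.random.default_rng(0)
W = rng.normal(size=(N, N))        # synaptic weight matrix (placeholder)
threshold = 1.0
state = rng.random(N) > 0.5        # boolean firing state

def step(state):
    # weighted sum of inputs, then threshold: the model described in the paper
    return (W @ state) > threshold

for _ in range(1000):              # a thousand ticks, deterministic and lossless
    state = step(state)
```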
I’m unimpressed.