This seems like a great idea—if we put together a concrete list of questions to ask, it could be worth his time to come over.
If anyone wants to ask any questions, leave a comment and maybe we can get some direct answers. (But make sure your question isn’t in the AmA, first!)
Q: What do you all make of Bostrom and Sandberg’s Whole Brain Emulation Roadmap?
Hi, I’m Terry Stewart, one of the researchers on the project.
I like the roadmap, and it seems to be the right way to go if the goal is to emulate a particular person’s brain. However, our goal is to understand the human brain, so we are reaching for whole-system understanding, which is exactly what the WBE approach doesn’t require.
I believe that the approach we are taking is a novel method for understanding the human brain that has a reasonable chance of producing results faster than the pure WBE approach (or, at the very least, the advances in understanding provided by our approach may make WBE significantly simpler). Of course, to make that claim, I need to justify why our approach is significantly different from what the hundreds of other researchers who are also trying to understand the human brain are doing.
The key difference is that we have a neural compiler: a system that takes a mathematical description of the function to be computed and the properties of the neurons involved, and produces a set of connection weights that will cause those neurons to approximate that function. This is a radically different approach to building neural networks, and we’re still working out the consequences of this compiler. There’s a technical overview of the system here [http://ctnsrv.uwaterloo.ca/cnrglab/node/297], and the system itself is open source and available at [http://nengo.ca]. This is what let us build Spaun: we took a bunch of descriptions of the function of different brain areas, converted them into math, and compiled them into neurons.
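To make the idea concrete, here is a minimal sketch (plain NumPy, not Nengo's actual code) of what such a compiler does at its core: given sampled tuning curves for a population and a target function, it solves a regularized least-squares problem for decoding weights, and the connection weights then follow as an outer product with the next population's encoders. The tuning-curve model and parameter values below are illustrative assumptions, not the ones used in Spaun.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_samples = 50, 200

# Illustrative rectified-linear rate curves standing in for LIF responses.
encoders = rng.choice([-1.0, 1.0], size=n_neurons)   # preferred directions (1-D case)
gains = rng.uniform(0.5, 2.0, size=n_neurons)
biases = rng.uniform(-1.0, 1.0, size=n_neurons)

x = np.linspace(-1, 1, n_samples)                     # values the population represents
activities = np.maximum(0.0, gains * encoders * x[:, None] + biases)

f_target = x ** 2                                     # the function to "compile"

# Core compiler step: regularized least squares for decoders d, so activities @ d ~ f(x).
gamma = activities.T @ activities + 1e-3 * np.eye(n_neurons)
upsilon = activities.T @ f_target
decoders = np.linalg.solve(gamma, upsilon)

# Weights to a downstream neuron j with encoder e_j are then w_ij = e_j * d_i (an outer
# product), so the individual connection weights are computed, never trained.
print("max decoding error:", np.max(np.abs(activities @ decoders - f_target)))
```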
Right now, we use a very simple neuron model (LIF, the leaky integrate-and-fire neuron, which is basically the simplest spiking model), but the technique is applicable to any type of neuron we feel like using (and have the computational power to handle). An interesting part of the research is determining what increased functional capacities you get from using more complex neuron models.
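For reference, here is a toy version of that simplest spiking model: a single leaky integrate-and-fire neuron driven by an input current. The time constants and threshold are illustrative defaults, not the parameters used in our models.

```python
import numpy as np

def lif_spike_times(current, dt=0.001, tau_rc=0.02, tau_ref=0.002, v_th=1.0):
    """Simulate one LIF neuron driven by an input-current array; return spike times (s)."""
    v, refractory, spikes = 0.0, 0.0, []
    for step, j in enumerate(current):
        if refractory > 0.0:
            refractory -= dt                      # neuron is silent after a spike
            continue
        v += (dt / tau_rc) * (j - v)              # leaky integration toward the input
        if v >= v_th:                             # threshold crossing: emit a spike
            spikes.append(step * dt)
            v, refractory = 0.0, tau_ref          # reset and enter the refractory period
    return spikes

# One second of constant drive; stronger input gives a higher firing rate.
print(len(lif_spike_times(np.full(1000, 1.5))), "spikes per second at input 1.5")
```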
Indeed, the main thing that makes me think this is a novel and useful way of understanding the brain is that we get constraints on the types of computations that can be performed. For example, it turns out to be really easy to compute the circular convolution of two 500-dimensional vectors (an operation we need for our approach to symbol-like reasoning), but very hard to get neurons to find which of five numbers is the largest (the max function). These sorts of constraints have caused us to examine very different types of algorithms for reasoning, and we found that certain inductive reasoning problems are surprisingly easy with these sorts of algorithms [http://ctnsrv.uwaterloo.ca/cnrglab/node/16].
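For anyone curious about the binding operation mentioned above, circular convolution of two vectors is just element-wise multiplication in the Fourier domain. The snippet below shows the operation itself; the random roughly-unit-length vectors are only an illustration, not how Spaun's vectors are actually generated.

```python
import numpy as np

def circular_convolution(a, b):
    """Bind two vectors: inverse FFT of the element-wise product of their FFTs."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

rng = np.random.default_rng(1)
dim = 500
a = rng.normal(0.0, 1.0 / np.sqrt(dim), dim)   # random vectors with roughly unit length
b = rng.normal(0.0, 1.0 / np.sqrt(dim), dim)

bound = circular_convolution(a, b)
print(bound.shape)   # (500,): the result has the same dimensionality as the inputs
```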