Our knowledge of the brain, relative to our goals for it, is at the level of the physics knowledge of someone who has just discovered that spraying water on a sunny day produces a rainbow. It’s not even physics yet.
I’d tend to disagree with this; we have a pretty good idea of how some areas of the brain work (the V1 cortex), we are making good progress in understanding how other parts work (cortical microcircuits, etc.), and we haven’t seen anything to indicate that other areas of the brain run on principles that are far-fetched or alien compared to what we already know.
But I always considered ‘understanding the brain’ to be a bit overrated, as the brain is an evolutionary hodge-podge, a big snowball of accumulated junk that has been rolling down the slope for 500 million years. In the future we will eventually understand the brain for sentimental reasons, but I’d give only a 1% probability that understanding it is necessary for the intelligence explosion to occur. We already have machines capable of doing tasks corresponding to areas of the brain that we have no idea how they work. In fact, we aren’t even sure how our machines work; we just know that they do. We’re far more likely to stumble upon AI than to create it through a forced effort of brain emulation.
We have an unconfirmed, simplified hypothesis, with nice drawings, for how microcircuits in the brain work. These models ignore more than a million things (literally: they ignore specific synapses, the multiplicity of synaptic connections, and so on; if you sum those things up and look at the model, I would say it ignores about that many things). I’m fine with simplifying assumptions, but the cortical microcircuit models are a butterfly flying in a hurricane.
The only reason we understand V1 is that it is a retinotopic inverted map that has been through very few non-linear transformations (the same goes for the tonotopic auditory areas). By V4 we are already completely lost. For those who don’t know: the brain has between 100 and 500 areas, depending on how you count, and we have a middling guess at a simplified model that applies well to two of them and moderately well to perhaps 10–25. And even if you could say which functions V4 participates in most, that would not tell you how it does them.
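To give a sense of what that kind of simplified V1 model amounts to, here is a minimal sketch: an oriented Gabor-style filter bank applied retinotopically by convolution, one linear step plus a pointwise rectification. The Gabor formulation, the parameter values, and the random stand-in image are my own illustrative assumptions, not a model taken from this discussion.

```python
# Minimal sketch, assuming the "simplified model" of V1 is an oriented
# Gabor filter bank (an illustrative assumption on my part).
import numpy as np
from scipy.signal import convolve2d

def gabor(size=21, wavelength=6.0, theta=0.0, sigma=3.0):
    """Oriented Gabor patch: a Gaussian envelope times a sinusoidal grating."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / wavelength)
    return envelope * carrier

# A tiny "V1": orientation-tuned filters applied retinotopically by convolution,
# i.e. one linear transformation followed by a pointwise nonlinearity.
image = np.random.rand(64, 64)                    # stand-in for a retinal input
responses = [
    np.maximum(convolve2d(image, gabor(theta=t), mode="same"), 0)  # half-wave rectify
    for t in np.linspace(0, np.pi, 4, endpoint=False)
]
print([r.shape for r in responses])               # one response map per orientation
```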
All true points, but consider your V4 example. We have software that is gradually approaching mammalian-level ability for visual information processing (not human-level just yet, but our visual cortex is larger than most animals’ entire cortices, so that’s not surprising). So, as far as building AI is concerned, so what if we don’t understand V4 yet, if we can produce software that is that good at image processing?
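As a concrete illustration of “software that is that good at image processing”: a minimal sketch of classifying an image with an off-the-shelf pretrained network, with no reference to V4 anywhere. It assumes a recent torchvision (0.13+); the choice of ResNet-50 and the file name cat.jpg are placeholders of mine, not anything specific to the systems mentioned above.

```python
# Minimal sketch: off-the-shelf image classification with a pretrained network.
# Assumes torch and torchvision (0.13+) are installed and "cat.jpg" exists locally.
import torch
from PIL import Image
from torchvision import models
from torchvision.models import ResNet50_Weights

weights = ResNet50_Weights.DEFAULT               # ImageNet-pretrained weights
model = models.resnet50(weights=weights).eval()  # no model of V4 anywhere in here
preprocess = weights.transforms()                # matching resize/normalize pipeline

img = preprocess(Image.open("cat.jpg").convert("RGB")).unsqueeze(0)  # (1, 3, 224, 224)
with torch.no_grad():
    probs = model(img).softmax(dim=1)

top_prob, top_idx = probs.max(dim=1)
print(weights.meta["categories"][top_idx.item()], float(top_prob))
```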
I am more confident that we can produce software that can classify images, music, and faces correctly than I am that we can integrate the multimodal aspects of these modules into a coherent being that thinks it has a self, goals, and an identity, and that can reason about morality. That’s what I tried to address in my FLI grant proposal, which was rejected (correctly so, by the way; it needed the latest improvements, and clearly, if they actually needed it, AI money should reach Nick, Paul and Stuart before our team). We’ll be presenting it in Oxford tomorrow.
Shhh, don’t tell anyone, here, just between us, you get it before the Oxford professors ;) https://docs.google.com/document/d/1D67pMbhOQKUWCQ6FdhYbyXSndonk9LumFZ-6K6Y73zo/edit