The switching rate in a processor is faster than the firing rate of neurons.
All else being equal, a computer should be faster than an aggregate of neurons. But all isn’t equal, even when comparing different processors. Comparing the number of transistors in a modern processor with the number of synapses in a human brain, the synapses vastly outnumber the transistors. Furthermore, the brain is massively parallel and has a specialized architecture. For what it does, it is well optimized, at least compared to how optimized our software and hardware are for similar tasks at this point.
For instance, laptop processors are general-purpose processors: able to do many different tasks, they aren’t especially fast or good at any one of them. Some specific tasks can use custom-made processors which, even if their clock rate is slower or they have fewer transistors, will still vastly outperform a general-purpose processor on the task they were custom-built for.
It reportedly took 35,000 processor cores to render Avatar. If we assume that a Six-Core Opteron 2400 (2009, the same year as Avatar) has roughly 10^9 transistors, then we have (35,000/6)*10^9 = 5.83*10^12 transistors.
The primary visual cortex has about 280 million neurons, and a typical neuron has 1,000 to 10,000 synapses. Assuming 10,000 per neuron, that makes 2.8*10^8 * 10^4 = 2.8*10^12 synapses.
By this calculation it takes 5.83*10^12 transistors to render Avatar and 2.8*10^12 synapses to simulate something similar on the fly, which is roughly the same amount.
Since the clock rate of a processor is about 10^9 Hz and the firing rate of a neuron is about 200 Hz, does this mean that the algorithms our brain uses are very roughly (10^9)/200 = 5*10^6 times more efficient?
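For concreteness, here is the back-of-envelope arithmetic above as a small script; every figure in it (transistors per chip, synapses per neuron, clock and firing rates) is just the rough assumption stated above, not a measured value.

```python
# Back-of-envelope comparison using the rough figures assumed above.

cores_for_avatar = 35_000          # render-farm cores reported for Avatar
cores_per_chip = 6                 # Six-Core Opteron 2400
transistors_per_chip = 1e9         # ~10^9 transistors per chip (rough)

v1_neurons = 2.8e8                 # ~280 million neurons in primary visual cortex
synapses_per_neuron = 1e4          # upper end of the 1,000-10,000 range

cpu_clock_hz = 1e9                 # ~1 GHz-class switching rate
neuron_rate_hz = 200               # ~200 Hz firing rate

transistors = cores_for_avatar / cores_per_chip * transistors_per_chip
synapses = v1_neurons * synapses_per_neuron
speed_ratio = cpu_clock_hz / neuron_rate_hz

print(f"transistors: {transistors:.2e}")   # ~5.83e12
print(f"synapses:    {synapses:.2e}")      # ~2.80e12
print(f"clock ratio: {speed_ratio:.1e}")   # ~5e6
```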
I don’t think this is a valid comparison; you have no idea whether rendering Avatar is similar to processing visual information.
Also, without mentioning the rate at which those processors rendered Avatar, the number of processors has much less meaning. You could probably do it with a single processor, 35,000 times more slowly.
Some questions we would need to answer, then:
1) What is the effective level of visual precision computed by those processors for Avatar, versus the level of detail that’s processed in the human visual cortex?
2) Is the synapse the equivalent of a transistor if we are to estimate the respective computing power of a brain and a computer chip? (i.e., is there more hidden computation going on at other levels? As synapses use different neurotransmitters, does that add computational capability? Are there processes within neurons that do computational work too? Are other cells, such as glial cells, performing computationally relevant operations as well?)
There are several issues here.
First, just because ~5*10^12 transistors were used to render Avatar (slower than real-time, by the way) does not mean that it minimally requires ~5*10^12 transistors to render Avatar.
For example, I have done some prototyping for fast, high-quality real-time volumetric rendering, and I’m pretty confident that the Avatar scenes (after appropriate database conversion) could be rendered in real-time on a single modern GPU using fast voxel cone tracing algorithms. That would entail only about 5*10^9 transistors, though we should also mention storage, because these techniques would require many gigabytes of off-chip storage for the data (on a flash drive, for example).
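To make “voxel cone tracing” a bit more concrete, here is a minimal sketch of the core loop, not an actual renderer: the function names are hypothetical, a crude nearest-neighbour lookup stands in for the hardware-filtered texture fetches a real GPU implementation would use, and only a single occlusion-style cone is traced. The idea is that a cone is marched through a prefiltered (mipmapped) voxel grid, reading coarser levels as the cone widens, so one sample per step replaces many rays.

```python
import numpy as np

def build_mip_chain(density, levels):
    """Prefilter a cubic density grid by repeated 2x box averaging (a crude mip chain)."""
    mips = [density]
    for _ in range(levels - 1):
        d = mips[-1]
        n = d.shape[0] // 2
        mips.append(d.reshape(n, 2, n, 2, n, 2).mean(axis=(1, 3, 5)))
    return mips

def sample(mips, pos, level):
    """Nearest-neighbour lookup at a given mip level; pos is in [0, 1)^3."""
    grid = mips[min(int(level), len(mips) - 1)]
    n = grid.shape[0]
    idx = np.clip((pos * n).astype(int), 0, n - 1)
    return float(grid[tuple(idx)])

def trace_cone(mips, origin, direction, half_angle, max_dist, step_scale=0.5):
    """March one cone front-to-back, reading coarser mips as the cone widens."""
    base = 1.0 / mips[0].shape[0]        # finest voxel size
    t, occlusion = base, 0.0             # start one voxel out to avoid self-occlusion
    while t < max_dist and occlusion < 0.99:
        diameter = max(2.0 * np.tan(half_angle) * t, base)
        level = np.log2(diameter / base)                 # mip whose voxels match the cone width
        a = sample(mips, origin + t * direction, level)  # treat density as opacity in [0, 1]
        occlusion += (1.0 - occlusion) * a               # front-to-back compositing
        t += diameter * step_scale                       # step size grows with the cone
    return occlusion

# Tiny usage example: one ambient-occlusion-style cone through a sparse 64^3 grid.
rng = np.random.default_rng(0)
grid = (rng.random((64, 64, 64)) > 0.995).astype(float)
mips = build_mip_chain(grid, levels=6)
print(trace_cone(mips, np.array([0.5, 0.5, 0.05]), np.array([0.0, 0.0, 1.0]),
                 half_angle=np.radians(10), max_dist=1.0))
```

A real renderer would run many such cones per pixel on the GPU; the point here is only that the per-pixel work reduces to a short loop over prefiltered data rather than a full path trace.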
Second, rendering and visual recognition are probably of roughly similar complexity, but it would be more accurate to do an apples to apples comparison of human V1 vs a fast algorithmic equivalent of V1.
Current published GPU neuron simulation techniques can handle a few million neurons per GPU, so simulating the ~2.8*10^8 neurons of human V1 would take on the order of 100 GPUs.
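The ballpark behind that figure, treating “a few million neurons per GPU” as roughly three million (an assumption, not a published number):

```python
import math

v1_neurons = 2.8e8        # ~280 million neurons in human V1 (figure used above)
neurons_per_gpu = 3e6     # "a few million" per GPU, taken here as ~3 million

print(math.ceil(v1_neurons / neurons_per_gpu))   # ~94, i.e. on the order of 100 GPUs
```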
Once again I don’t think current techniques are near the lower bound, and I have notions of how V1 equivalent work could be done on around one modern GPU, but this is more speculative.
Intuitively, the claim that general-purpose processors “aren’t really fast or good at any” task doesn’t seem right at all: I can think of plenty of things that a human plus an external memory aid (like pencil and paper) can do that a laptop can’t, but (aside from dumb hardware stuff like “connect to the internet” and so on) I can’t think of anything for which the reverse is true; and I can think of plenty of things that they both can do, but that a laptop can do much faster. Or am I misinterpreting you?
I’m not sure I understand your question.
I guess part of my point is that a laptop processor is a very general purpose tool, while the human brain is a collection of specialized modules. Also, the more general a tool is, the less efficient it will be on average for any task.
The human brain might be seen as a generalist, but not in the same way a laptop computer processor is.
Besides, even a laptop processor has specializations and advantages over the human brain in certain narrow domains, such as number crunching and fast arithmetic operations.