Critics like to point out that DL requires tons of data, but so does the human brain.
Both deep networks and the human brain require lots of data, but the kind of data they require is not the same. Humans engage mostly in semi-supervised learning, where supervised data comprises a small fraction of the total. They also manage feats of “one-shot learning” (making critically-important generalizations from single datapoints) that are simply not feasible for neural networks or indeed other ‘machine learning’ methods.
A few hundred TitanX’s can muster up perhaps a petaflop of compute.
Could you elaborate? I think this number is too high by roughly one order of magnitude.
The high-end estimate of the brain is 10 petaflops (100 trillion synapses * 100 Hz max firing rate).
Estimating the computational capability of the human brain is very difficult. Among other things, we don’t know what the neuroglia cells may be up to, and these are just as numerous as neurons.
Both deep networks and the human brain require lots of data, but the kind of data they require is not the same. Humans engage mostly in semi-supervised learning, where supervised data comprises a small fraction of the total.
This is probably a misconception for several reasons. Firstly, given that we don’t fully understand the learning mechanisms in the brain yet, it’s unlikely that it’s mostly one thing. Secondly, we have some pretty good evidence for reinforcement learning in the cortex, hippocampus, and basal ganglia. We have evidence for internally supervised learning in the cerebellum, and unsupervised learning in the cortex.
The point being: these labels aren’t all that useful. Efficient learning is multi-objective and doesn’t cleanly divide into these narrow categories.
The best current guess for questions like this is almost always that the brain’s solution is highly efficient, given its constraints.
When a Go player watches a game between two other players far above their own current skill, the optimal learning update is probably going to be an SL-style update. Even if you can’t understand the reasons behind the moves yet, it’s best to compress them into the cortex for later. If you can do a local search to understand why the move is good, then that is even better and it becomes more like RL. But again, these hard divisions are arbitrary and limiting.
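To make the contrast concrete, here is a toy sketch of the two kinds of update (the moves, logits, and the “advantage” weight are all invented for illustration and are not anything AlphaGo-specific): an imitation-style update nudges the policy toward the observed move regardless of understanding, while the RL-flavoured variant scales that nudge by how good a local search says the move was.

```python
import math

# Toy policy over a handful of candidate moves. Everything here is made up
# purely to illustrate the SL-vs-RL contrast described above.
moves = ["A", "B", "C", "D"]
logits = [0.0, 0.0, 0.0, 0.0]
lr = 0.5

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def update(logits, observed_move, weight):
    """Nudge the policy toward observed_move, scaled by `weight`.

    weight = 1.0           -> plain SL/imitation update ("compress the move for later")
    weight = advantage > 0 -> RL-flavoured update, where a local search / value
                              estimate says how good the move actually was
    """
    probs = softmax(logits)
    target = moves.index(observed_move)
    return [l + lr * weight * ((1.0 if i == target else 0.0) - p)
            for i, (l, p) in enumerate(zip(logits, probs))]

# SL-style: imitate the stronger player's move even without understanding it.
logits_sl = update(logits, "B", weight=1.0)

# RL-style: the same move, but weighted by a (hypothetical) advantage from local search.
logits_rl = update(logits, "B", weight=2.5)

print(softmax(logits_sl))   # probability mass shifts toward "B"
print(softmax(logits_rl))   # shifts further, because the search says the move was good
```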
A few hundred TitanX’s can muster up perhaps a petaflop of compute.
Could you elaborate? I think this number is too high by roughly one order of magnitude.
The GTX Titan X has a peak performance of 6.1 teraflops, so you’d need only a few hundred to get a petaflop supercomputer (more specifically, around 175).
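For what it’s worth, the arithmetic behind that card count, as a minimal sketch (assuming the published 6.1 teraflop peak figure were actually sustained, which real workloads won’t quite manage):

```python
# Back-of-the-envelope check of the card count quoted above.
titanx_peak_tflops = 6.1          # GTX Titan X peak single-precision throughput
target_pflops = 1.0               # one petaflop

cards_needed = target_pflops * 1000 / titanx_peak_tflops
print(f"Cards for 1 PFLOPS at peak: {cards_needed:.0f}")   # ~164, so ~175 with a little headroom
```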
The high-end estimate of the brain is 10 petaflops (100 trillion synapses * 100 Hz max firing rate).
Estimating the computational capability of the human brain is very difficult. Among other things, we don’t know what the neuroglia cells may be up to, and these are just as numerous as neurons.
It’s just a circuit, and it obeys the same physical laws. We have this urge to mystify it for various reasons. Neuroglia cannot possibly contribute more to the total compute power than the neurons, based on simple physics/energy arguments. It’s another stupid red herring, like quantum woo.
These estimates are only validated when you can use them to make predictions. And if you have the right estimates (brain equivalent to 100 teraflops-ish, give or take an order of magnitude), you can roughly predict the outcome of many comparisons between brain circuits vs. equivalent ANN circuits (more accurately than using the wrong estimates).
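A sketch of how the two estimates in this thread relate: the 10 petaflop figure is the one quoted above (100 trillion synapses at a 100 Hz max rate), while the ~100 teraflop figure follows if you assume an average firing rate closer to 1 Hz rather than the maximum. That average-rate assumption is mine, added here only to show the arithmetic, not something stated in the thread.

```python
# High-end estimate quoted above: every synapse updated at the max firing rate.
synapses = 100e12            # 100 trillion synapses
max_rate_hz = 100            # max firing rate
high_end_ops = synapses * max_rate_hz
print(f"High-end: {high_end_ops / 1e15:.0f} petaflop-equivalents")      # 10

# The ~100 teraflop figure is consistent with an average firing rate of
# ~1 Hz instead of the 100 Hz maximum (my assumption, not the thread's).
avg_rate_hz = 1
average_ops = synapses * avg_rate_hz
print(f"Average-rate: {average_ops / 1e12:.0f} teraflop-equivalents")   # 100
```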
This is probably a misconception for several reasons. Firstly, given that we don’t fully understand the learning mechanisms in the brain yet, it’s unlikely that it’s mostly one thing …
We don’t understand the learning mechanisms yet, but we’re quite familiar with the data they use as input. “Internally” supervised learning is just another term for semi-supervised learning anyway. Semi-supervised learning is plenty flexible enough to encompass the “multi-objective” features of what occurs in the brain.
The GTX Titan X has a peak performance of 6.1 teraflops, so you’d need only a few hundred to get a petaflop supercomputer (more specifically, around 175).
Raw and “peak performance” FLOPS numbers should be taken with a grain of salt. Anyway, given that a Titan X apparently draws as much as 240 W of power at full load, your “petaflop-scale supercomputer” will cost you a few hundred thousand dollars and draw 42 kW to do what the brain does within 20 W or so. Not a very sensible use for that amount of computing power, except for the odd publicity stunt, I suppose. Like playing Go.
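The power and cost figures here follow from the same card count; a minimal sketch (the roughly $1,000-per-card price is my assumption, and none of this counts host systems, networking, or cooling):

```python
cards = 175                   # roughly a petaflop of Titan X cards at peak
watts_per_card = 240          # full-load draw quoted above
price_per_card = 1000         # rough assumption (USD); excludes hosts, networking, cooling

cluster_watts = cards * watts_per_card
print(f"Cluster draw: {cluster_watts / 1000:.0f} kW")            # 42 kW
print(f"Card cost alone: ${cards * price_per_card:,}")           # $175,000
print(f"Power ratio vs ~20 W brain: {cluster_watts / 20:.0f}x")  # ~2100x
```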
It’s just a circuit, and it obeys the same physical laws.
Of course. Neuroglia are not magic or “woo”. They’re physical things, much like silicon chips and neurons.
Raw and “peak performance” FLOPS numbers should be taken with a grain of salt.
Yeah, but in this case the best convolution and GEMM codes can reach something like 98% efficiency for the simple standard algorithms and dense input, which is what most ANNs use for just about everything.
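As a sketch of what “efficiency” means here: count the roughly 2·M·N·K floating-point operations a dense GEMM performs and divide achieved throughput by the hardware’s peak. The code below runs on a CPU via NumPy, so it will land nowhere near a Titan X’s 6.1 teraflops; the peak number is only there as the reference point being discussed.

```python
import time
import numpy as np

# Dense matrix multiply: a GEMM of (M,K) x (K,N) does roughly 2*M*N*K flops.
M = N = K = 2048
a = np.random.rand(M, K).astype(np.float32)
b = np.random.rand(K, N).astype(np.float32)

t0 = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - t0

achieved_tflops = 2 * M * N * K / elapsed / 1e12
peak_tflops = 6.1   # Titan X peak, used purely as the comparison point above
print(f"Achieved: {achieved_tflops:.2f} TFLOPS "
      f"({100 * achieved_tflops / peak_tflops:.1f}% of a Titan X's peak)")
```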
given that a Titan X apparently draws as much as 240 W of power at full load, your “petaflop-scale supercomputer” will cost you a few hundred thousand dollars and draw 42 kW to do what the brain does within 20 W or so
Well, in the case of Go, and for an increasing number of domains, it can do far more than any brain, and it learns far faster. Also, the current implementations are very far from optimal form. There is at least another 100x to 1000x of easy performance improvement in the years ahead. So what 100 GPUs can do now will be accomplished by a single GPU in just a year or two.
It’s just a circuit, and it obeys the same physical laws.
Of course. Neuroglia are not magic or “woo”. They’re physical things, much like silicon chips and neurons.
Right, and they use a small fraction of the energy budget, and thus can’t contribute much to the computational power.
Well, in the case of Go, and for an increasing number of domains, it can do far more than any brain, and it learns far faster.
This might actually be the most interesting thing about AlphaGo. Domain experts who have looked at its games have marveled most at how truly “book-smart” it is. Even though it has not shown a lot of creativity or surprising moves (indeed, it was comparatively weak at the start of Game 1), it has fully internalized its training and can always come up with the “standard” play.
Right, and they use a small fraction of the energy budget, and thus can’t contribute much to the computational power.
Not necessarily—there might be a speed vs. energy-per-op tradeoff, where neurons specialize in quick but energy-intensive computation, while neuroglia just chug along in the background. We definitely see such a tradeoff in silicon devices.
Domain experts who have looked at its games have marveled most at how truly “book-smart” it is. Even though it has not shown a lot of creativity or surprising moves (indeed, it was comparatively weak at the start of Game 1), it has fully internalized its training and can always come up with the “standard” play.
Do you have links to such analyses? I’d be interested in reading them.
EDIT: Ah, I guess you were referring to this: https://www.reddit.com/r/MachineLearning/comments/43fl90/synopsis_of_top_go_professionals_analysis_of/