Eliezer thinks it’s a big deal.

Thanks. Key quote:

What this indicates is not that deep learning in particular is going to be the Game Over algorithm. Rather, the background variables are looking more like “Human neural intelligence is not that complicated and current algorithms are touching on keystone, foundational aspects of it.” What’s alarming is not this particular breakthrough, but what it implies about the general background settings of the computational universe.
His argument proves too much. You could easily transpose it for the time when Checkers or Chess programs beat professional players: back then the “keystone, foundational aspect” of intelligence was thought to be the ability to do combinatorial search in large solution spaces, and scaling up to AGI was “just” a matter of engineering better heuristics. Sure, it didn’t work on Go yet, but Go players were not using a different cortical algorithm than Chess players, were they?
Or you could transpose it for the time when MCTS Go programs reached “dan” (advanced amateur) level. They still couldn’t beat professional players, but professional players were not using a different cortical algorithm than advanced amateur players, were they?
AlphaGo succeeded at the current achievement by using artificial neural networks in a regime where they are known to do well. But this regime, and games like Go, Chess, Checkers, Othello, etc., represent a small part of the range of human cognitive tasks. In fact, we probably find these kinds of board games fascinating precisely because they are so different from the usual cognitive stimuli we deal with in everyday life.
It’s tempting to assume that the “keystone, foundational aspect” of intelligence is learning essentially the same way that artificial neural networks learn. But humans can do things like “one-shot” learning, learning from weak supervision, learning in non-stationary environments, etc., which no current neural network can do, and not just because of scale or architectural “details”. Researchers generally don’t know how to make neural networks, or really any other kind of machine learning algorithm, do these things, except with massive task-specific engineering. Thus I think it’s fair to say that we still don’t know what the foundational aspects of intelligence are.
In the brain, the same circuitry that is used to solve vision is used to solve most of the rest of cognition—vision is 10% of the cortex. Going from superhuman vision to superhuman Go suggests superhuman anything/everything is getting near.
The reason is that strong Go requires both deep, slow inference over huge amounts of data/time (which DL excels at, similar to what the cortex/cerebellum specialize in) and fast, low-data inference (the MCTS part here). There is still much room for improvement in generalizing beyond current MCTS techniques and in integrating them into larger-scale ANNs, but that is increasingly looking straightforward.
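To make the division of labor concrete, here is a minimal sketch (not AlphaGo’s actual implementation) of how a learned policy/value function can guide Monte Carlo tree search; `policy_value_fn`, `legal_moves_fn`, and `apply_move_fn` are hypothetical hooks into a game engine and a trained network.

```python
import math

class Node:
    """One game state in the search tree."""
    def __init__(self, prior):
        self.prior = prior        # move probability suggested by the policy net
        self.visits = 0
        self.value_sum = 0.0
        self.children = {}        # move -> Node

    def value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.5):
    """PUCT-style selection: exploit the running value estimate, explore
    moves the policy net considers promising but rarely visited."""
    total = sum(child.visits for child in node.children.values())
    def score(item):
        _, child = item
        exploration = c_puct * child.prior * math.sqrt(total + 1) / (1 + child.visits)
        return child.value() + exploration
    return max(node.children.items(), key=score)

def mcts(root_state, policy_value_fn, legal_moves_fn, apply_move_fn, n_sims=200):
    """Return visit counts per move after n_sims guided simulations.

    policy_value_fn(state) -> ({move: prior}, value) stands in for the
    trained network(s); value is from the viewpoint of the player to move.
    """
    priors, _ = policy_value_fn(root_state)
    root = Node(prior=1.0)
    root.children = {m: Node(priors.get(m, 1e-3)) for m in legal_moves_fn(root_state)}

    for _ in range(n_sims):
        node, state, path = root, root_state, [root]
        # 1. Walk down the tree using the statistics gathered so far.
        while node.children:
            move, node = select_child(node)
            state = apply_move_fn(state, move)
            path.append(node)
        # 2. Expand the leaf and evaluate it with the value head
        #    (replacing the random rollouts of classic MCTS).
        priors, value = policy_value_fn(state)
        node.children = {m: Node(priors.get(m, 1e-3)) for m in legal_moves_fn(state)}
        # 3. Back the value up the path, flipping sign at each ply.
        for visited in reversed(path):
            visited.visits += 1
            visited.value_sum += value
            value = -value

    return {m: child.visits for m, child in root.children.items()}
```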
It’s tempting to assume that the “keystone, foundational aspect” of intelligence is learning essentially the same way that artificial neural networks learn.
Yes, but only because “ANN” is enormously broad (tensor/linear algebra program space), and basically includes all possible routes to AGI (all possible approximations of bayesian inference).
But humans can do things like “one-shot” learning, learning from weak supervision, learning in non-stationary environments, etc., which no current neural network can do, and not just because of scale or architectural “details”.
Bayesian methods excel at one-shot learning, and are steadily being integrated into ANN techniques (providing the foundation needed to derive new learning and inference rules). Transfer and semi-supervised learning are also progressing rapidly, and the theory is all there. I don’t know as much about the non-stationary case, but I’d be pretty surprised if there wasn’t progress there as well.
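A toy illustration (made-up numbers, not any particular paper’s method) of why a strong prior makes one-shot learning possible: a single labelled example already moves a conjugate Gaussian posterior most of the way to the observation.

```python
# Prior over a new class's mean feature value, assumed to have been learned
# from many previously seen classes; then ONE labelled example arrives.
prior_mean, prior_var = 0.0, 1.0
observation, obs_var = 2.3, 0.25

# Standard conjugate-Gaussian posterior update from a single observation.
post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
post_mean = post_var * (prior_mean / prior_var + observation / obs_var)

print(post_mean, post_var)   # 1.84, 0.2: one example does most of the work
```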
Thus I think it’s fair to say that we still don’t know what the foundational aspects of intelligence are.
LOL. Generalized DL + MCTS is—rather obviously—a practical approximation of universal intelligence like AIXI. I doubt MCTS scales well enough to all domains, but the obvious next step is for DL to eat MCTS techniques (so that new, more complex heuristic search techniques can be learned automatically).
In the brain, the same circuitry that is used to solve vision is used to solve most of the rest of cognition
And in a laptop the same circuitry that is used to run a spreadsheet is used to play a video game.
Systems that are Turing-complete (in the limit of infinite resources) tend to have an independence between hardware and possibly many layers of software (a program running on a VM running on a VM running on a VM, and so on). Things that look similar at some levels may have lots of differences at other levels, and thus things that look simple at some levels can have lots of hidden complexity at other levels.
Going from superhuman vision
Human-level (perhaps weakly superhuman) vision is achieved only in very specific tasks where large supervised datasets are available. This is not very surprising, since even traditional “hand-coded” computer vision could achieve superhuman performance in some narrow and clearly specified tasks.
Yes, but only because “ANN” is enormously broad (tensor/linear algebra program space), and basically includes all possible routes to AGI (all possible approximations of bayesian inference).
Again, ANNs are Turing-complete, therefore in principle they include literally everything, but so does brute-force search over C programs.
In practice, if you try to generate C programs by brute-force search you will get stuck pretty fast, while ANNs with gradient-descent training empirically work well on various kinds of practical problems. But they do not work on all the kinds of practical problems that humans are good at, and how to make them work on those problems, if it is even efficiently possible, is a whole open research field.
Bayesian methods excel at one-shot learning
With lots of task-specific engineering.
Generalized DL + MCTS is—rather obviously—a practical approximation of universal intelligence like AIXI.
So are things like AIXI-tl, Hutter-search, Gödel machine, and so on. Yet I would not consider any of them as the “foundational aspect” of intelligence.
And in a laptop the same circuitry that is used to run a spreadsheet is used to play a video game.
Exactly, and this is a good analogy to illustrate my point. Discovering that the cortical circuitry is universal rather than task-specific (like an ASIC) was a key insight.
Human-level (perhaps weakly superhuman) vision is achieved only in very specific tasks where large supervised datasets are available.
Note I didn’t say that we have solved vision to a superhuman level. But the quoted claim is simply not true: current SOTA nets can achieve human-level performance in at least some domains using modest amounts of unsupervised data combined with small amounts of supervised data.
Human vision builds on enormous amounts of unsupervised data—much larger than ImageNet. Learning in the brain is complex and multi-objective, but perhaps best described as self-supervised (unsupervised meta-learning of sub-objective functions which then can be used for supervised learning).
A five-year-old will have experienced perhaps 50 million seconds’ worth of video data. ImageNet consists of 1 million images, which is vaguely equivalent to 1 million seconds of video if we include 30x amplification for small translations/rotations.
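Spelling out the arithmetic behind that comparison (the 30 fps frame rate and the waking-hours assumption are mine):

```python
# Rough data-volume comparison from the paragraph above.
waking_seconds_per_day = 12 * 3600                     # assume ~12 waking hours
five_years_of_vision = 5 * 365 * waking_seconds_per_day
imagenet_frames = 1_000_000 * 30                       # 1M images, 30x augmentation
imagenet_video_equiv = imagenet_frames / 30            # at 30 frames per second

print(five_years_of_vision)   # ~79 million seconds, same order as the 50M above
print(imagenet_video_equiv)   # 1 million seconds
```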
The brain’s vision system is about 100x larger than current ‘large’ vision ANNs. But if DeepMind decided to spend the cash on that and make it a huge one-off research priority, do you really doubt that they could build a superhuman general vision system that learns with a similar dataset and training duration?
So are things like AIXI-tl, Hutter-search, Gödel machine, and so on. Yet I would not consider any of them as the “foundational aspect” of intelligence.
The foundation of intelligence is just inference—simply because universal inference is sufficient to solve any other problem. AIXI is already simple, but you can make it even simpler by replacing the planning component with inference over high EV actions, or even just inference over program space to learn approx planning.
So it all boils down to efficient inference. The exciting new progress in DL—for me at least—is in understanding how successful empirical optimization techniques can be derived as approx inference update schemes with various types of priors. This is what I referred to as the new and upcoming “Bayesian methods”—bayesian-grounded DL.
Yes, but only because “ANN” is enormously broad (tensor/linear algebra program space), and basically includes all possible routes to AGI (all possible approximations of bayesian inference).
“Enormously broad” is just another way of saying “not very useful”. We don’t even know in which sense (if any) the “deep networks” that are used in practice may be said to approximate Bayesian inference; the best we can do, AIUI, is make up a hand-wavy story about how they must be some “hierarchical” variation of single-layer networks, i.e. generalized linear models.
Specifically, I meant approx bayesian inference over the tensor program space to learn the ANN, not that the ANN itself needs to implement bayesian inference (although they will naturally tend to learn that, as we see in all the evidence for various bayesian ops in the brain).
I agree. I don’t find this result to be any more or less indicative of near-term AI than Google’s success on ImageNet in 2012. The algorithm learns to map positions to moves and values using CNNs, just as CNNs can be used to learn mappings from images to 350 classes of dog breeds and more. It turns out that Go really is a game about pattern recognition and that with a lot of data you can replicate the pattern detection for good moves in very supervised ways (one could call their reinforcement learning actually supervised because the nature of the problem gives you credit assignment for free).
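For concreteness, here is a schematic of the kind of convolutional position-to-move mapping being described, written as a PyTorch sketch; the layer sizes and the 17 input feature planes are placeholders, not the actual AlphaGo architecture.

```python
import torch
import torch.nn as nn

class TinyPolicyNet(nn.Module):
    """Illustrative CNN mapping a Go position to move probabilities.

    Input: a (batch, planes, 19, 19) tensor of board feature planes.
    Output: a probability over the 361 board points (pass move omitted).
    Sizes are made up for illustration; the real networks are much larger.
    """
    def __init__(self, planes=17, channels=64):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(planes, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(channels, 1, kernel_size=1)  # one logit per point

    def forward(self, x):
        logits = self.head(self.trunk(x)).flatten(1)       # (batch, 361)
        return torch.softmax(logits, dim=1)

net = TinyPolicyNet()
board = torch.zeros(1, 17, 19, 19)   # dummy empty-board features
print(net(board).shape)              # torch.Size([1, 361])
```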
I think what this result says is thus: “Any tasks humans can do, an AI can now learn to do better, given a sufficient source of training data.”
Games lend themselves to auto-generation of training data, in the sense that the AI can at the very least play against itself. No matter how complex the game, a deep neural net will find the structure in it, and find a deeper structure than human players can find.
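A bare-bones sketch of the self-play data generation being described; the game-engine hooks (`legal_moves`, `apply_move`, `winner`) and `choose_move` are hypothetical stand-ins.

```python
def self_play_game(choose_move, initial_state, legal_moves, apply_move, winner):
    """Play one game against itself and return (state, move, outcome) examples,
    where outcome is +1/-1 from the perspective of the player who moved.
    choose_move(state, moves) would typically sample from the current policy net."""
    history, state, player = [], initial_state, +1
    while legal_moves(state):
        move = choose_move(state, legal_moves(state))
        history.append((state, move, player))
        state = apply_move(state, move)
        player = -player
    result = winner(state)                      # +1, -1, or 0 for a draw
    return [(s, m, result * p) for (s, m, p) in history]

# Training loop sketch: generate many games, pool examples, refit the net, repeat.
```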
We have now answered the question “Are deep neural nets going to be sufficient to match or exceed task-specific human performance at any well-specified task?” with “Yes, they are, and they can do it better and faster than we suspected.” The next hurdle—which all the major companies are working on—is to create architectures that can find structure in smaller datasets, less well-tailored training data, and less well-specified tasks.
I don’t think it says anything like that.

I included the word “sufficient” as an ass-covering move, because one facet of the problem is we don’t really know what will serve as a “sufficient” amount of training data in what context.
But, what specific types of tasks do you think machines still can’t do, given sufficient training data? If your answer is something like “physics research,” my rejoinder would be that if you could generate training data for that job, a machine could do it.
Grand pronouncements with an ass-covering move look silly :-)
One obvious problem is that you are assuming stability. Consider modeling something that changes (in complex ways) with time—like the economy of the United States. Is “training data” from the 1950s relevant to the current situation?
Generally speaking, the speed at which your “training data” gets stale puts an upper limit on the relevant data that you can possibly have and that, in turn, puts an upper limit on the complexity of the model (NNs included) that you can build on its basis.
I don’t see how we know anything like the claim that deep NNs with “sufficient training data” would be sufficient for all problems. We’ve seen them be sufficient for many different problems and can expect them to be sufficient for many more, but all?
I think what this result says is thus: “Any tasks humans can do, an AI can now learn to do better, given a sufficient source of training data.”
Yes, but that would likely require an extremely large amount of training data, because to prepare actions for many kinds of situations you’d face an exponential blow-up to cover many combinations of many possibilities, and hence the model would need to be huge as well. It would also require high-quality data sets with simple correction signals in order to work, which are expensive to produce.
I think that, above all, for building a real-time AI you need reuse of concepts so that abstractions can be recombined and adapted to new situations; and for concept-based predictions (reasoning) you need one-shot learning so that trains of thought can be memorized and built upon. In addition, the entire network needs to somehow learn to determine which parts of the network in the past were responsible for current reward signals, which are delayed and noisy. If there is a simple and fast solution to this, then AGI could be right around the corner. If not, it could take several decades of research.
In addition, the entire network needs to somehow learn to determine which parts of the network in the past were responsible for current reward signals, which are delayed and noisy.
This is a well-known problem, called reinforcement learning. It is a significant component in the reported results. (What happens in practice is that a network’s ability to assign “credit” or “blame” for reward signals falls off exponentially with increasing delay. This is a significant limitation, but reinforcement learning is nevertheless very helpful given tight feedback loops.)
Yes, but as I wrote above, the problems of credit assignment, reward delay and noise are non-existent in this setting, and hence their work does not contribute at all to solving AI.

Credit assignment and reward delay are nonexistent? What do you think happens when one diffs the board strength of two potential boards?
Reward delay is not very significant in this task, since the task is episodic and fully observable, and there is no time preference, thus you can just play a game to completion without updating and then assign the final reward to all the positions.
In more general reinforcement learning settings, where you want to update your policy during the execution, you have to use some kind of temporal difference learning method, which is further complicated if the world states are not fully observable.
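A compressed sketch of the two regimes contrasted above (function and variable names are illustrative):

```python
GAMMA = 0.99   # discount factor; with no time preference, as above, this is 1.0

def monte_carlo_targets(positions, final_reward):
    """Episodic case: play the game to completion, then give every position
    in the game the same training target (the final result)."""
    return [(position, final_reward) for position in positions]

def td0_update(value_table, state, reward, next_state, alpha=0.1):
    """TD(0): update the value estimate online, before the episode ends.
    value_table is a dict mapping states to estimated returns."""
    old = value_table.get(state, 0.0)
    target = reward + GAMMA * value_table.get(next_state, 0.0)
    value_table[state] = old + alpha * (target - old)
```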
Credit assignment is taken care of by backpropagation, as usual in neural networks. I don’t know why RaelwayScot brought it up, unless they meant something else.
I meant that for AI we will possibly require high-level credit assignment, e.g. experiences of regret like “I should be more careful in these kinds of situations”, or the realization that one particular strategy out of the entire sequence of moves worked out really nicely. Instead, it penalizes/reinforces all moves of one game equally, which is potentially a much slower learning process. It turns out playing Go can be solved without much structure in the credit-assignment process, hence I said the problem is non-existent, i.e. there wasn’t even a need to consider it and further our understanding of RL techniques.
thus you can just play a game to completion without updating and then assign the final reward to all the positions.
Agreed, with the caveat that this is a stochastic object, and thus not a fully simple problem. (Even if I knew all possible branches of the game tree that originated in a particular state, I would need to know how likely any of those branches are to be realized in order to determine the current value of that state.)
Even if I knew all possible branches of the game tree that originated in a particular state, I would need to know how likely any of those branches are to be realized in order to determine the current value of that state.
Well, the value of a state is defined assuming that the optimal policy is used for all the following actions. For tabular RL you can actually prove that the updates converge to the optimal value function/policy function (under some conditions). If NNs are used you don’t have any convergence guarantees, but in practice the people at DeepMind are able to make it work, and this particular scenario (perfect observability, determinism and short episodes) is simpler than, for instance, that of the Atari DQN agent.
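For reference, the tabular case being contrasted with the NN case looks like this (a plain Q-learning sketch; `env_reset`/`env_step` are hypothetical environment hooks, and the convergence proofs additionally require conditions such as decaying step sizes and sufficient exploration that this fixed-alpha version glosses over):

```python
import random
from collections import defaultdict

def q_learning(env_reset, env_step, actions, episodes=10_000,
               alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning with epsilon-greedy exploration.
    env_reset() -> state; env_step(state, action) -> (next_state, reward, done)."""
    Q = defaultdict(float)
    for _ in range(episodes):
        state, done = env_reset(), False
        while not done:
            # Explore occasionally, otherwise act greedily w.r.t. current Q.
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env_step(state, action)
            best_next = 0.0 if done else max(Q[(next_state, a)] for a in actions)
            # Standard Q-learning temporal-difference update.
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```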
“Nonexistent problems” was meant as hyperbole to say that they weren’t solved in interesting ways and are extremely simple in this setting because the states and rewards are noise-free. I am not sure what you mean by the second question. They just apply gradient descent on the entire history of moves of the current game such that expected reward is maximized.
It seems to me that solving the problem of value assignment to boards—“What’s the edge for W or B if the game state looks like this?”—basically solves that problem, since it gives you the counterfactual information you need (how much would placing a stone here improve my edge?) to answer those questions.
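In code, that counterfactual use of a board-value function is just a diff over candidate moves (a sketch; `value_fn` is a hypothetical learned evaluator assumed to score positions from our point of view):

```python
def rank_moves(state, legal_moves, apply_move, value_fn):
    """Rank candidate moves by how much each one improves our estimated edge."""
    baseline = value_fn(state)
    gains = {move: value_fn(apply_move(state, move)) - baseline
             for move in legal_moves(state)}
    return sorted(gains.items(), key=lambda item: item[1], reverse=True)
```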
I agree that it’s a much simpler problem here than it is in a more complicated world, but I don’t think it’s trivial.

Man, I wouldn’t bother. EY has spoken, we are done here.
There are other big deals. The MS ImageNet win also contained frightening progress on the training meta level.
The other issue is that constructing this kind of mega-neural net is tremendously difficult. Landing on a particular set of algorithms—determining how each layer should operate and how it should talk to the next layer—is an almost epic task. But Microsoft has a trick here, too. It has designed a computing system that can help build these networks.
As Jian Sun explains it, researchers can identify a promising arrangement for massive neural networks, and then the system can cycle through a range of similar possibilities until it settles on this best one. “In most cases, after a number of tries, the researchers learn [something], reflect, and make a new decision on the next try,” he says. “You can view this as ‘human-assisted search.’”

-- extracted from very readable summary at wired: http://www.wired.com/2016/01/microsoft-neural-net-shows-deep-learning-can-get-way-deeper/
Going by that description, it is much, much less important than residual learning, because hyperparameter optimization is not new. There are a lot of approaches: grid search, random search, Gaussian processes. Some hyperparameter optimizations baked into MSR’s deep learning framework would save some researcher time and effort, certainly, but I don’t know that it would’ve made any big difference unless they have something quite unusual going on.
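As a reference point, the “random search” baseline mentioned above fits in a few lines (`train_and_score` is a stand-in for training a network on a config and returning its validation score):

```python
import random

def random_search(train_and_score, space, trials=20, seed=0):
    """Sample hyperparameter settings at random and keep the best one."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(trials):
        cfg = {name: rng.choice(values) for name, values in space.items()}
        score = train_and_score(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Illustrative search space:
space = {"lr": [1e-1, 1e-2, 1e-3, 1e-4], "depth": [20, 56, 110], "dropout": [0.0, 0.3, 0.5]}
```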
(I liked one paper which took a Bayesian multi-armed bandit approach and treated error curves as partial information about final performance; it would switch between different networks being trained based on performance, regularly ‘freezing’ and ‘thawing’ networks as the probability that each network would become the best performer changed with information from additional mini-batches/epochs.) Probably the single coolest result is that last year some researchers showed it is possible to somewhat efficiently backpropagate on hyperparameters! So hyperparameters just become more parameters to learn, and you can load up on all sorts of stuff without worrying about making your hyperparameter optimization futile or having to train a billion times. This would both save people a lot of time (for vanilla networks) and allow exploring extremely complicated and heavily parameterized families of architectures, and it would be a big deal. Unfortunately, it’s still not efficient enough for the giant networks we want to train. :(
The key point is that machine learning starts to happen at the hyper-parameter level. Which is one more step toward systems that optimize themselves.

A step which was taken a long time ago and does not seem to have played much of a role in recent developments; for the most part, people don’t bother with extensive hyperparameter tuning. Better initialization, better algorithms like dropout or residual learning, better architectures, but not hyperparameters.