AlphaGo uses two deep neural networks to prune the enormous search tree of a Go position, and it does so unsupervised.
lol no. The pruning (‘policy’) network is entirely the result of supervised learning from human games. The other network is used to evaluate game states.
Your other ideas are more interesting, but they aren't specific to AlphaGo; they apply to deep neural networks in general.
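To make the division of labour concrete, here's a minimal sketch of the two-network split with random stubs standing in for the real deep convolutional nets. One caveat: AlphaGo uses the policy priors to bias MCTS exploration rather than hard top-k pruning, so the `top_k` cut below is a simplification for illustration, and all the names are mine, not the paper's.

```python
import numpy as np

def policy_net(state):
    """Prior probability over the 19x19 = 361 board points."""
    logits = np.random.randn(361)          # stub: a real net computes these
    e = np.exp(logits - logits.max())
    return e / e.sum()

def value_net(state):
    """Estimated win probability for the side to move."""
    return float(1.0 / (1.0 + np.exp(-np.random.randn())))  # stub

def expand(state, legal_moves, top_k=8):
    """Search-tree pruning: only consider the moves the policy net favours."""
    priors = policy_net(state)
    return sorted(legal_moves, key=lambda m: priors[m], reverse=True)[:top_k]

def evaluate_leaf(state):
    """State evaluation: score a leaf with the value net, not a deeper search."""
    return value_net(state)

state = None                                # placeholder for a board position
print(expand(state, legal_moves=list(range(361))))
print(evaluate_leaf(state))
```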
If I understood correctly, this is only the first stage in the training of the policy network. Then (quoting from Nature):
The second stage of the training pipeline aims at improving the policy network by policy gradient reinforcement learning (RL). The RL policy network p_ρ is identical in structure to the SL policy network, and its weights ρ are initialised to the same values, ρ = σ. We play games between the current policy network p_ρ and a randomly selected previous iteration of the policy network.
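For concreteness, that second stage boils down to a REINFORCE-style self-play loop against a randomly chosen past iteration of the policy. Here's a toy sketch; the flat softmax policy, `play_game` stub, and update schedule are all illustrative stand-ins, not the paper's actual network or code.

```python
import copy
import random
import numpy as np

class SoftmaxPolicy:
    """Toy stand-in for p_rho: a flat softmax over 361 moves."""
    def __init__(self, n_moves=361):
        self.w = np.zeros(n_moves)   # rho initialised to sigma: start from a copy

    def probs(self):
        e = np.exp(self.w - self.w.max())
        return e / e.sum()

    def sample(self):
        return int(np.random.choice(len(self.w), p=self.probs()))

def play_game(p1, p2):
    """Stand-in for a full game of Go; returns +1 if p1 wins, -1 otherwise."""
    return random.choice([1, -1])

current = SoftmaxPolicy()
pool = [copy.deepcopy(current)]          # previous iterations of the policy
lr = 0.01

for step in range(1000):
    opponent = random.choice(pool)       # "randomly selected previous iteration"
    moves = [current.sample() for _ in range(10)]   # toy 10-move 'trajectory'
    z = play_game(current, opponent)     # game outcome, current player's view
    # Policy gradient / REINFORCE: scale log-prob gradients by the outcome z
    for m in moves:
        grad = -current.probs()
        grad[m] += 1.0                   # gradient of log softmax(w)[m]
        current.w += lr * z * grad
    if step % 100 == 99:
        pool.append(copy.deepcopy(current))  # grow the opponent pool
```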
Except that they don't seem to use the resulting network in actual play; its only use is deriving their state-evaluation (value) network. (The paper notes the SL policy network actually performed better inside the search than the stronger RL one, presumably because humans select a diverse beam of promising moves.)
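On that reading, the RL policy network's job is just to generate self-play games whose positions, labelled with the final outcome, train the value network by regression. A toy sketch under that assumption; the linear "value net" and the game generator are illustrative stubs, though sampling a single position per game to reduce correlation between examples is from the paper.

```python
import numpy as np

def rl_selfplay_sample():
    """Stub: one (position, outcome) pair from an RL-policy self-play game.
    One position per game, as in the paper; both parts are random here."""
    position = np.random.randn(361)
    z = float(np.random.choice([-1.0, 1.0]))
    return position, z

w = np.zeros(361)                  # toy linear value 'network'
lr = 1e-3
for _ in range(10000):
    s, z = rl_selfplay_sample()
    v = np.tanh(w @ s)             # predicted outcome in (-1, 1)
    w -= lr * (v - z) * (1.0 - v**2) * s   # SGD on 0.5 * (v - z)**2
```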