A relevant paper came out 3 days ago talking about how AlphaGo used Bayesian hyperparameter optimization and how that improved performance: https://arxiv.org/pdf/1812.06855v1.pdf
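For anyone who wants a concrete picture of what that kind of tuning loop looks like, here's a minimal, self-contained sketch of Bayesian optimization with a Gaussian-process surrogate and an expected-improvement acquisition function (the standard setup for this technique). The toy 1-D objective below is purely illustrative, standing in for an expensive evaluation such as win rate over many self-play games; it isn't AlphaGo's actual tuning code.

```python
# Minimal sketch of Bayesian hyperparameter optimization:
# GP surrogate + expected-improvement acquisition. Illustrative only.
import numpy as np
from scipy.stats import norm

def objective(x):
    # Stand-in for an expensive evaluation (e.g. win rate of a config
    # measured over many games). Toy 1-D function we want to maximize.
    return -np.sin(3 * x) - x**2 + 0.7 * x

def rbf_kernel(a, b, length=0.3, var=1.0):
    # Squared-exponential kernel for the Gaussian-process surrogate.
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_query, x_obs, y_obs, noise=1e-6):
    # Standard GP regression: posterior mean and std at the query points.
    K = rbf_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf_kernel(x_query, x_obs)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y_obs
    var = rbf_kernel(x_query, x_query).diagonal() - np.einsum(
        "ij,jk,ik->i", Ks, Kinv, Ks)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    # EI: expected amount by which a candidate beats the best value so far.
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
x_obs = rng.uniform(-1.0, 2.0, size=3)           # a few random initial points
y_obs = np.array([objective(x) for x in x_obs])

for _ in range(15):
    # Fit the surrogate to what we've seen, then evaluate the objective
    # only at the candidate the acquisition function rates highest.
    candidates = np.linspace(-1.0, 2.0, 500)
    mu, sigma = gp_posterior(candidates, x_obs, y_obs)
    x_next = candidates[np.argmax(expected_improvement(mu, sigma, y_obs.max()))]
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))

print(f"best x = {x_obs[np.argmax(y_obs)]:.3f}, value = {y_obs.max():.3f}")
```

The point of the approach is sample efficiency: when each evaluation costs hundreds of games, spending a little compute on a surrogate model to pick the next configuration beats grid or random search.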
It’s interesting to set the OpenAI compute article’s graph to linear scale: the compute that went into AlphaGo utterly dwarfs everything else. DeepMind seems to be well ahead of nearly everyone else in the engineering effort and money they’ve put into scaling.