First, they are solving real-world problems. But as usual, companies talk a lot more about the research than the trade secrets. Google uses it heavily, even for the crown jewel of Search. The Deepmind post yesterday mentions DQN is being used for recommender systems internally; I had never seen that mentioned anywhere before, and I don’t know how DQN would even work for that (if you treat e.g. every YT video as a different ‘action’ whose Q-value is being estimated, that can’t possibly scale; but I’m not sure how else recommending a particular video could be encoded into the DQN architecture). Google Translate will be, is, or has already been rolling out the encoder-decoder RNN framework delivering much better translation quality (media reports and mentions in talks make it hard for me to figure out which, exactly). The TensorFlow promotional materials mention in passing that TF and trained models are being used by something like 500 non-research groups inside Google (what for? of course, they don’t say). Google is already rolling out deep learning as a cloud service in beta, to make better use of all their existing infrastructure like TPUs. Deepmind recently managed to optimize Google’s already hyper-optimized data centers to reduce cooling electricity consumption by 40% (!), but we’re still waiting on the paper to be published to see the details. The recent Facebook piece quotes them as saying that FB considers their two AI labs to have already paid for themselves many times over (how?); their puff-piece blog about their text framework implies that it’s being used all over Facebook in a myriad of ways (which don’t get explained). Baidu is using their RNN work for voice recognition on smartphones in the Chinese market, apparently with a lot of success; given the language barrier there and Baidu’s similarly comprehensive scope as Google and Facebook, they are doubtless using NNs for many other things.
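To make the scaling worry concrete: here is a minimal sketch (my speculation, not anything Deepmind has described) contrasting the naive one-output-per-video DQN head with the obvious workaround, where the ‘action’ is fed in as an item embedding and the network scores (state, candidate) pairs instead. All the names and sizes below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
state_dim, n_videos, embed_dim = 32, 1_000_000, 16

# Naive DQN head: one output unit per video-as-action.
# The final layer alone needs state_dim * n_videos weights
# (32 million here), growing linearly with the catalog -- and
# every new upload would change the network's output shape.
naive_head_params = state_dim * n_videos

# Workaround: treat the action as an item embedding and score
# (state, item) pairs, so network size is independent of catalog size.
W = rng.normal(size=(state_dim + embed_dim, 1))  # hypothetical scoring layer

def q_value(state, item_embedding):
    """Q(s, a) for one candidate; the action is an embedding, not an index."""
    return float(np.concatenate([state, item_embedding]) @ W)

state = rng.normal(size=state_dim)
candidates = rng.normal(size=(5, embed_dim))  # a small retrieved candidate set
scores = [q_value(state, c) for c in candidates]
best = int(np.argmax(scores))  # recommend the highest-Q candidate
```

The catch with the second formulation is that the max-over-actions in the Q-learning update then requires scoring (or approximately retrieving) candidates rather than a single forward pass, which is presumably why it isn’t a drop-in use of the vanilla DQN architecture.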
Tesla’s already (somewhat recklessly) rolling out self-driving cars powered by Mobileye; Mobileye doesn’t use a pure end-to-end CNN approach like Geohot and some others, but they do acknowledge using NNs in their pipeline and are actively publishing NN research. People involved say that companies are spending staggering sums.
Second, in the initial stages of a Singularity, why would you expect all the initial results to show up as deployed commercial services with no known academic precedents? I would expect it the other way around: even when corporations do groundbreaking R&D, it’s more typical for it to be published first and only then start having real-world effects. (e.g. Bell Labs: things like Unix were written about long before AT&T started selling it commercially.)