This answer likely betrays my lack of imagination, but I’m not sure what Google would use GPT-3 for. It’s probably much more expensive than whatever Gmail uses to predict text, and the additional accuracy might not provide much additional value.
Maybe they could sell it as a service, as part of GCP? I’m not sure how many people inside Google have the ability to sign $15M checks; you would need at least one of them to believe in a large market, and I’m personally not sure there’s a large enough market for GPT-3 to be worth Google’s time.
This is all to say, I don’t think you should draw the conclusion that Google is either stupid or hiding something. They’re likely focusing on finding better architectures; it seems a little early to focus on scaling up existing ones.
Text embeddings for knowledge graphs and ads are the most immediately obvious big-bucks application.
Can you explain more?
GPT-3-based text embedding should be extremely useful for creating summaries of arbitrary text (such as web pages or ad text) which can be fed into the existing Google search/ad infrastructure. (The API already has a lesser-known half, where you upload sets of docs and GPT-3 searches them.) Of course, they surely already use NNs for embeddings, but at Google scale, enhanced embeddings ought to be worth billions.
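To make this concrete, here is a minimal sketch of what embedding-based document search looks like: embed every document once, embed the query, and rank by cosine similarity. The `embed` function below is a hypothetical stand-in for whatever model produces the vectors (GPT-3's doc-search endpoint, BERT, etc.); it returns deterministic random vectors purely so the script runs, and nothing here is OpenAI's or Google's actual implementation.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Hypothetical stand-in for a real embedding model. A real system
    # would call GPT-3/BERT/etc.; here we fake a deterministic vector
    # just so the example is self-contained and runnable.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(128)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Index the corpus once: store (document, vector) pairs.
docs = ["page about running shoes", "page about GPU pricing", "ad for coffee beans"]
index = [(d, embed(d)) for d in docs]

def search(query: str, k: int = 2):
    # Rank documents by similarity to the query vector.
    q = embed(query)
    return sorted(index, key=lambda pair: -cosine(q, pair[1]))[:k]

for doc, _ in search("cheap graphics cards"):
    print(doc)
```

At Google scale the sorted-list scan would be replaced by an approximate nearest-neighbor index, but the shape of the pipeline is the same.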
Worth noting that they already use BERT in Search. https://blog.google/products/search/search-language-understanding-bert/
I think the OP and my comment suggest that scaling current models 10000x could lead to AGI or at least something close to it. If that is true, it doesn’t make sense to focus on finding better architectures right now.
Minor note: could people include commas in their Big Numbers, to make it easier to distinguish 1000 from 10,000 at a glance?
Sounds like something GPT-3 would say...