Google published a paper about LaMDA, which (at a glance) mentions nothing about AlphaStar or any RL. It does talk about some sort of supervised fine-tuning and queries to “external knowledge resources and tools.”
The retrieval system is pretty interesting: https://arxiv.org/pdf/2201.08239.pdf#subsection.6.2 It includes a translator and a calculator, which apparently get run automatically if LaMDA generates appropriately-formatted text; the rest is somewhat WebGPT-style web browsing for answers. The WebGPT paper explicitly states it uses a canned offline pre-generated dataset of text (even though using live web pages would be easy), but the LaMDA paper is unclear about whether it does the same thing, because it says things like:
The information retrieval system is also capable of returning snippets of content from the open web, with their corresponding URLs. The TS tries an input string on all of its tools, and produces a final output list of strings by concatenating the output lists from every tool in the following order: calculator, translator, and information retrieval system. A tool will return an empty list of results if it can’t parse the input (e.g., the calculator cannot parse “How old is Rafael Nadal?”), and therefore does not contribute to the final output list.
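A minimal sketch of that dispatch logic as quoted (the tool names, signatures, and parse-failure convention below are my assumptions; only the try-every-tool-and-concatenate-in-fixed-order behavior comes from the paper):

```python
import re

# Hypothetical tool interfaces: each takes the raw query string and returns a
# list of result strings, or an empty list if it cannot parse the input
# (per the quoted description).
def calculator(query: str) -> list[str]:
    if not re.fullmatch(r"[\d\s+\-*/().]+", query):
        return []  # e.g. cannot parse "How old is Rafael Nadal?"
    try:
        return [str(eval(query, {"__builtins__": {}}))]
    except Exception:
        return []

def translator(query: str) -> list[str]:
    return []  # placeholder: would call a machine-translation system

def retrieval(query: str) -> list[str]:
    return []  # placeholder: would return open-web snippets with their URLs

# The TS tries the same input on every tool and concatenates their output
# lists in a fixed order: calculator, translator, information retrieval.
def toolset(query: str) -> list[str]:
    results: list[str] = []
    for tool in (calculator, translator, retrieval):
        results.extend(tool(query))
    return results

toolset("1887 + 2")  # -> ["1889"]; a non-arithmetic query falls through
```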
Inasmuch as the other two tools are run autonomously as external programs (rather than simply calling LaMDA with a specialized prompt or something), there's a heavy emphasis on including accurate live URLs, there's no mention of using a static pregenerated cache, and the human workers who are generating/rating/editing LaMDA queries explicitly do use live URLs, it seems like LaMDA is empowered to do (arbitrary?) HTTP GETs to retrieve live URLs 'from the open web'.
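To make the two readings concrete: under the live-web interpretation, the retrieval tool does an HTTP GET at inference time; under a WebGPT-style interpretation, it consults a frozen offline cache. Both sketches below are hypothetical, since the paper specifies neither:

```python
import requests

def fetch_live(url: str) -> str:
    """Live reading: an arbitrary HTTP GET against the open web at inference time."""
    return requests.get(url, timeout=10).text

# WebGPT-style alternative: a static pre-generated cache built offline,
# with no network access at inference time.
STATIC_CACHE: dict[str, str] = {}  # url -> page text

def fetch_cached(url: str) -> str:
    return STATIC_CACHE.get(url, "")
```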
(This is all RL, of course: offline imitation & preference learning of how to use non-differentiable actions like calling a calculator or translator, with a single bootstrap phase, at a minimum. You even have expert agents editing LaMDA trajectories into better demonstration trajectories, to increase reward in blackbox environments like chatbots.)
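That "single bootstrap phase" amounts to supervised fine-tuning on the expert-edited trajectories, i.e. behavioral cloning. A sketch of that loss, assuming a HuggingFace-style causal-LM interface (the names and shapes are illustrative, not from the paper):

```python
import torch.nn.functional as F

def imitation_loss(model, input_ids, target_ids):
    # Next-token cross-entropy on the expert-edited trajectory: the model is
    # cloned onto the corrected actions (tool calls, final replies) rather
    # than optimized online against a reward signal.
    logits = model(input_ids).logits        # (batch, seq, vocab)
    return F.cross_entropy(
        logits.view(-1, logits.size(-1)),   # (batch*seq, vocab)
        target_ids.view(-1),                # (batch*seq,)
    )
```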
The paper could use more detail on how querying external knowledge resources works; as far as its examples show, the retrieved information is simply appended to the input string. Example:
LaMDA to user: Hi, how can I help you today? <EOS> [...]
user to LaMDA: When was the Eiffel Tower built? <EOS>
LaMDA-Base to LaMDA-Research: It was constructed in 1887.<EOS>
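Putting the pieces together, the dialogue-with-tools loop appears to work roughly as below: LaMDA-Base drafts an unchecked reply, LaMDA-Research repeatedly queries the TS and gets results appended to the context, and the loop ends when Research addresses the user. A hedged sketch (the recipient-prefix routing is paraphrased from the paper's examples; everything else is my assumption):

```python
from typing import Callable

def research_loop(
    context: str,
    base: Callable[[str], str],           # LaMDA-Base: drafts an unchecked answer
    research: Callable[[str], str],       # LaMDA-Research: emits TS query or user reply
    toolset: Callable[[str], list[str]],  # the TS dispatcher sketched above
) -> str:
    context += f"LaMDA-Base to LaMDA-Research: {base(context)} <EOS> "
    while True:
        out = research(context)
        # Per the paper, the first token of the output names the recipient.
        if out.startswith("TS"):   # e.g. "TS, Eiffel Tower construction date"
            results = toolset(out.partition(",")[2].strip())
            context += f"TS to LaMDA-Research: {' | '.join(results)} <EOS> "
        else:                      # e.g. "User, it was opened in 1889."
            return out
```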
Retraining in the middle of a conversation seems to be well beyond what is documented in the 2201.08239 paper.