It might be more useful to discuss Google’s dense GPT-like LaMDA-137b instead, because there’s so little information about Pathways or MUM. (We also know relatively little about the Wu Dao series of competing multimodal sparse models.) Google papers refuse to name LaMDA when they use it, for unclear reasons (it’s not like they’re fooling anyone), but they’ve been doing interesting OA-like research with it: eg “Program Synthesis with Large Language Models”, “Finetuned Language Models Are Zero-Shot Learners”, or text style transfer.