Are Google, Facebook, and DeepMind currently working on GPT-like transformers? I would’ve thought that GPT-2 would show enough potential that they’d be working on better models of that class, but it’s been two and a half years and isn’t GPT-3 the only improvement there? (Not a rhetorical question, I wasn’t reading about new advances back then.) If yes, that makes me think several other multimodal transformers similar in size to GPT-3 probably won’t arrive until after 2022.
We’ll see! :)