So these results are not reported in “Multitask Prompted Training Enables Zero-Shot Task Generalization”, Sanh et al 2021?
For Sanh et al. (2021), we were able to negotiate access to preliminary numbers from the BIG-bench project and run the T0 models on it. However, the authors of Sanh et al. and the authors of BIG-bench are different groups of people.
The aforementioned BIG-bench paper from Google is now publicly available:
some highlights
the paper
Nope. Although the linked paper uses the same benchmark (a tiny subset of it), it comes from a separate research project.
As I understand it, the primary topic of the upcoming paper will be the BIG-bench project itself, and how the models from Google / OpenAI perform on it.