Yep, I agree that MMLU and Swag aren’t alignment benchmarks. I was using them as examples of “Want to test your model’s ability at X? Then use the standard X benchmark!” I’ll clarify in the text.
They tested toxicity (among other things) with their “safety prompts”, but we do have standard benchmarks for toxicity.
They could have turned their safety prompts into a new benchmark if they had run the same test on the other LLMs! This would’ve taken, idk, 2–5 hrs of labour?
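To make that concrete, here’s roughly what I have in mind (purely a sketch: `query_model`, `is_safe_response`, and the prompt list are hypothetical placeholders, not anything from the paper):

```python
# Sketch of "run the same safety prompts on every model and report a score".
# query_model and is_safe_response are hypothetical stand-ins: the first for whatever
# API each lab exposes, the second for the human (or automated) safety judgement.

SAFETY_PROMPTS = [
    "...",  # the paper's 30 safety prompts would go here
]

# Illustrative list; the models LIMA was compared against in the human evaluation.
MODELS = ["LIMA", "Alpaca 65B", "DaVinci003", "Bard", "Claude", "GPT-4"]

def query_model(model: str, prompt: str) -> str:
    """Hypothetical wrapper around each model's API."""
    raise NotImplementedError

def is_safe_response(response: str) -> bool:
    """Hypothetical safety judgement (human rater or classifier)."""
    raise NotImplementedError

def safety_score(model: str) -> float:
    """Fraction of the safety prompts the model handles safely."""
    responses = [query_model(model, p) for p in SAFETY_PROMPTS]
    return sum(is_safe_response(r) for r in responses) / len(responses)

if __name__ == "__main__":
    for model in MODELS:
        print(model, safety_score(model))
```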
The best MMLU-like benchmark for alignment proper is https://github.com/anthropics/evals, which is used in Anthropic’s Discovering Language Model Behaviors with Model-Written Evaluations. See here for a visualisation. Unfortunately, this benchmark was published by Anthropic, which makes it unlikely that competitors (esp. MetaAI) will use it.
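If you want to poke at it yourself, scoring a model on one of those eval files is only a few lines. I’m going from memory of the repo’s JSONL layout (question / answer_matching_behavior fields), so double-check against the actual files; `ask_model` is again a hypothetical stand-in:

```python
import json

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a real API call; should return e.g. ' Yes' or ' No'."""
    raise NotImplementedError

def matching_behavior_rate(path: str) -> float:
    """Fraction of questions where the model gives the 'matching behaviour' answer."""
    with open(path) as f:
        items = [json.loads(line) for line in f if line.strip()]
    hits = sum(
        ask_model(item["question"]).strip() == item["answer_matching_behavior"].strip()
        for item in items
    )
    return hits / len(items)

# e.g. matching_behavior_rate("persona/<some-behaviour>.jsonl")
```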
>They could have turned their safety prompts into a new benchmark if they had run the same test on the other LLMs! This would’ve taken, idk, 2–5 hrs of labour?
I’m not sure I understand what you mean by this. They ran the same prompts with all the LLMs, right? (That’s what Figure 1 shows...) Do you mean they should have tried the finetuning on the other LLMs as well? (I’ve only read your post, not the actual paper.) And how does this relate to turning their prompts into a new benchmark?
Sorry for any confusion. Meta only tested LIMA on their 30 safety prompts, not the other LLMs.
Figure 1 does not show the results from the 30 safety prompts, but instead the results of human evaluations on the 300 test prompts.