Thanks for writing this up!
I’m trying to understand why you take the argmax of the activations rather than the KL divergence or the average/total logprob across answers?
Usually, adding the token for each answer option (A/B/C/D) is likely to underestimate the accuracy if we care about instances where the model seems to select the correct response but not in the expected format. This happens more often in smaller models. With the example you gave, I’d still consider the following to be correct:

Question: Are birds dinosaurs? (A: yes, B: cluck, C: no, D: rawr)
Answer: no

I might even accept this:

Question: Are birds dinosaurs? (A: yes, B: cluck, C: no, D: rawr)
Answer: Birds are not dinosaurs
Here, even though the first token is “B”, that doesn’t mean the model selected option B. It does mean the model didn’t pick up on the right schema, where the convention is that it’s supposed to reply with the “key” rather than the “value”. Maybe “(B” is enough to deal with that.
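To make that concrete, here is a purely illustrative grading helper (not from the post; the name and signature are mine) that accepts the key in a few common formats or a restatement of the correct option’s text:

```python
def lenient_grade(completion: str, options: dict[str, str], correct_key: str) -> bool:
    """Illustrative only: accept the key in common formats ("C", "(C", "C)", "C:")
    or a completion that restates the correct option's text."""
    text = completion.strip()
    first = text.split()[0].rstrip(".,)") if text else ""
    if first in {correct_key, f"({correct_key}", f"{correct_key}:"}:
        return True
    # Fall back to matching the option's text itself ("no" for "C: no").
    return text.lower().startswith(options[correct_key].lower())

options = {"A": "yes", "B": "cluck", "C": "no", "D": "rawr"}
lenient_grade("no", options, "C")                       # True
lenient_grade("(C", options, "C")                       # True
lenient_grade("Birds are not dinosaurs", options, "C")  # False
```

The paraphrased answer in the last example still fails this kind of string check, which is where the judge-model experiment described below would come in.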
Since you mention that Phi-3.5 mini is pretrained on instruction-like data rather than finetuned for instruction following, it’s possible this is a big deal; it might even be the main reason the measured accuracy is competitive with Llama-2 13B.
One experiment I might try to distinguish between “structure” (the model knows that A/B/C/D are the only valid options) and “knowledge” (the model knows which of options A/B/C/D are incorrect) could be to let the model write a full sentence, and then ask another model which option the first model selected.
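A rough sketch of that setup; `generate` is a stand-in for whatever sampling call you use, and the prompt wording is made up:

```python
def structure_vs_knowledge_probe(subject_model, judge_model, question, options, generate):
    """Stage 1: the subject model answers in free text, with no A/B/C/D constraint.
    Stage 2: a second model maps that free-text answer back onto an option.
    The model handles and the `generate(model, prompt) -> str` helper are hypothetical."""
    free_answer = generate(subject_model, f"{question}\nAnswer in one sentence:").strip()

    option_block = "\n".join(f"{key}: {value}" for key, value in options.items())
    judge_prompt = (
        f"Question: {question}\n{option_block}\n"
        f"Another model answered: \"{free_answer}\"\n"
        "Which option does that answer correspond to? Reply with a single letter."
    )
    return generate(judge_model, judge_prompt).strip()[:1]
```

Agreement between the judged letter and the letter-token scoring would then roughly separate formatting failures from knowledge failures.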
What’s the layer-scan transformation you used?
It doesn’t really make sense to interpret feature activation values as log probabilities; if we did, we’d have to worry about scaling. It’s also not guaranteed that the score wouldn’t just decrease because of decreased accuracy on correct answers.
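As a toy illustration of the scaling point (made-up numbers; nothing here is from the post):

```python
import jax
import jax.numpy as jnp

# Per-option feature activations for one question, options ordered A, B, C, D.
acts = jnp.array([0.1, 0.0, 3.2, 0.4])

# Argmax scoring only uses the ordering, so any rescaling gives the same answer.
int(jnp.argmax(acts))         # 2 -> option C
int(jnp.argmax(10.0 * acts))  # still 2

# Reading the activations as unnormalized log-probs forces a scale choice:
# the implied distribution, and any KL or logprob score built on it, changes with it.
jax.nn.softmax(acts)          # fairly peaked on C
jax.nn.softmax(0.1 * acts)    # close to uniform
```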
Phi seems specialized for MMLU-like problems and has an outsized score for a model of its size, so I would be surprised if it’s biased by the format of the question. However, it’s possible that using answers instead of letters would improve raw accuracy in this case: the feature we used (45142) seems to max-activate on plain text rather than on multiple-choice answers, and it’s somewhat surprising that it does this well on multiple-choice questions. For this reason, I think using text answers would boost the feature’s score but wouldn’t change the relative ranking of features by accuracy. I don’t know how to adapt your proposed experiment to the task of measuring how accurate a feature is at eliciting the model’s knowledge of correct answers.
This is a technical detail: we use a jax.lax.scan over the layer index and store all layers in one structure with a special named axis. This mainly improves compilation time. Penzai has since implemented the technique in the main library.
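For readers unfamiliar with the trick, here is a minimal sketch of the general pattern in plain JAX (without Penzai’s named axes, and not the actual code): per-layer weights are stacked along a leading axis and the block function is scanned over it, so the block is traced once instead of once per layer.

```python
import jax
import jax.numpy as jnp

# Hypothetical stacked residual-MLP weights with a leading "layers" axis.
n_layers, d_model, d_hidden = 4, 16, 64
k1, k2 = jax.random.split(jax.random.PRNGKey(0))
w_in = 0.02 * jax.random.normal(k1, (n_layers, d_model, d_hidden))
w_out = 0.02 * jax.random.normal(k2, (n_layers, d_hidden, d_model))

def block(x, layer_weights):
    wi, wo = layer_weights          # weights for one layer, sliced out by scan
    x = x + jax.nn.gelu(x @ wi) @ wo
    return x, None                  # (carry, no per-layer output)

@jax.jit
def forward(x):
    x, _ = jax.lax.scan(block, x, (w_in, w_out))
    return x

forward(jnp.ones(d_model))
```

Under jit, a Python loop over separate layer modules gets unrolled and each iteration is compiled as its own chunk of the XLA program, which is what inflates compile time for deep models; the scan version compiles one block and reuses it.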