If perplexity on a task is gradually decreasing then I think that’s probably produced by some underlying gradual change in the model (which may be the sum of a ton of tiny discrete changes).
If accuracy and log loss are both improving, I think that’s most likely due to the same underlying phenomenon. That’s not nearly as obvious: it could be that there are two separate phenomena, where one gives rise to gradual improvements in perplexity without affecting accuracy and the other gives rise to abrupt improvements in accuracy without being reflected in perplexity. But it still seems like a very natural guess.
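As a toy illustration of how a single smooth underlying change can look gradual in log loss but abrupt in accuracy (all numbers below are made up purely to make the point concrete, not taken from any real run):

```python
import numpy as np

# Hypothetical toy numbers: the probability the model assigns to the correct
# answer drifts upward smoothly, while the strongest wrong answer stays fixed.
# Log loss improves gradually the whole time, but accuracy only moves at the
# single step where the correct answer overtakes the best distractor.

steps = np.arange(101)
p_correct = 0.05 + 0.005 * steps            # smooth underlying improvement
p_best_distractor = 0.30                    # held fixed for simplicity

log_loss = -np.log(p_correct)                             # gradual improvement
accuracy = (p_correct > p_best_distractor).astype(float)  # abrupt 0 -> 1 jump

print(f"log loss: {log_loss[0]:.2f} -> {log_loss[-1]:.2f} nats (smooth)")
print(f"accuracy jumps at step {int(np.argmax(accuracy))}")
```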
The induction bump in particular seems to involve accuracy and log loss improving together, unsurprisingly.
Of course the induction behavior is just one small driver of log loss, so it corresponds to a small blip on the overall loss or accuracy curves while corresponding to a big jump on some subtasks. In a larger model there are likely to be many events like this that don’t correspond to any blip at all in the overall loss curve while being important for a subtask. This seems unlikely to be the driver of the difference for the BIG-bench tasks under discussion, since the continuous log probability improvements and the discontinuous accuracy improvements are being measured on the same distribution.
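A quick back-of-the-envelope sketch of that weighting point, with invented numbers:

```python
# Toy arithmetic (made-up numbers): the overall loss is a weighted average,
# and induction-like contexts make up only a small fraction of tokens, so a
# large jump on that subtask is a barely visible blip in the aggregate curve.

subtask_weight = 0.01        # assumed fraction of tokens where the subtask matters
subtask_loss_before = 3.0    # nats, before the capability appears
subtask_loss_after = 1.0     # nats, after -- a 2-nat jump on the subtask
other_loss = 3.5             # nats on everything else, unchanged

overall_before = subtask_weight * subtask_loss_before + (1 - subtask_weight) * other_loss
overall_after = subtask_weight * subtask_loss_after + (1 - subtask_weight) * other_loss

print(f"overall loss moves by only {overall_before - overall_after:.3f} nats")  # 0.020
```

With a weight that small, even a dramatic subtask change shifts the aggregate curve by only hundredths of a nat.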
In the case of parities, I think there is a smooth underlying change in the model, e.g. see figure 3 in this paper. I agree that (i) such changes are not always visible in perplexity, e.g. for parities, and therefore it’s not obvious that you will know where to look for them even if they exist, and (ii) it’s not obvious whether they always exist; we just know about a few cases we’ve studied, like parities and grokking.
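For concreteness, here is a minimal sketch of the kind of sparse-parity task being referred to (the sizes below are placeholders, not the paper’s exact setup):

```python
import numpy as np

# Minimal sketch of a sparse-parity task: the label is the XOR of a hidden
# subset of k out of n input bits. Until a model locates that subset, its loss
# sits near chance (log 2 nats), even if internal features are changing
# smoothly underneath. Values of n and k are placeholders for illustration.

rng = np.random.default_rng(0)
n, k, num_samples = 40, 3, 10_000
hidden_subset = rng.choice(n, size=k, replace=False)

x = rng.integers(0, 2, size=(num_samples, n))
y = x[:, hidden_subset].sum(axis=1) % 2    # parity of the hidden subset

chance_loss = -np.log(0.5)                 # loss of always predicting 50/50
print(f"hidden subset: {sorted(hidden_subset.tolist())}, chance loss: {chance_loss:.3f} nats")
```

Until the hidden subset is identified, predictions stay near 50/50 and the loss stays near log 2 ≈ 0.69 nats, which is why a smooth underlying change can be invisible in perplexity.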