If you have <98% of performance explained (on webtext, relative to a unigram or bigram baseline), then you degrade from GPT-4 performance to GPT-3.5 performance
Two quick thoughts on why this isn’t as concerning to me as this dialogue emphasized.
1. If we evaluate SAEs by the quality of their explanations on specific narrow tasks, full distribution performance doesn’t matter
2. Plausibly the safety-relevant capabilities of GPT-(N+1) are a phase change from GPT-N, meaning that even the much larger loss increases from attaching SAEs to GPT-(N+1) could still leave it competitive with GPT-N (ht Tom for this one)
On (1), I agree: if you could explain 80% of GPT-4's performance on a task and metric where GPT-3.5 performs half as well as GPT-4, that would suffice to show something interesting that isn't in GPT-3.5. For instance, if an explanation could human-interpretably account for 80% of GPT-4's accuracy on solving APPS programming problems, that explained accuracy would still be higher than GPT-3.5's.
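To make the arithmetic concrete, here is a toy calculation; the accuracy numbers are made up for illustration, and the "half as well" ratio is the assumption from above.

```python
# Toy numbers, purely illustrative -- not real APPS results.
gpt4_acc = 0.40                # hypothetical GPT-4 accuracy on the task
gpt35_acc = 0.5 * gpt4_acc     # GPT-3.5 assumed to perform half as well

explained_fraction = 0.80      # fraction of GPT-4's accuracy the explanation recovers
explained_acc = explained_fraction * gpt4_acc   # 0.32

# The explained portion alone already beats GPT-3.5 (0.32 > 0.20), so the
# explanation must capture capability that GPT-3.5 lacks.
assert explained_acc > gpt35_acc
```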
However, I expect that performance on these sorts of tasks is fairly sensitive, so recovering 80% of task performance is much harder than recovering 80% of loss on webtext. Most prior results look at explaining loss on webtext or a narrow sub-distribution of webtext, not at preserving downstream performance on some task.
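For reference, a minimal sketch of the webtext metric being contrasted here, assuming the usual "fraction of loss recovered relative to a weak baseline" definition; the function name, signature, and example numbers are mine, not from the dialogue.

```python
def fraction_of_loss_recovered(loss_with_sae: float,
                               loss_original: float,
                               loss_baseline: float) -> float:
    """Share of the gap between a weak baseline (e.g. unigram/bigram
    cross-entropy on webtext) and the original model that survives when
    the SAE is spliced into the forward pass. 1.0 = no degradation,
    0.0 = no better than the baseline."""
    return (loss_baseline - loss_with_sae) / (loss_baseline - loss_original)

# Illustrative numbers: 98% recovered can still mean a nontrivial absolute
# loss increase when the baseline-to-model gap is wide.
print(fraction_of_loss_recovered(loss_with_sae=2.25,
                                 loss_original=2.20,
                                 loss_baseline=4.70))  # ~0.98
```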
There are some reasons why it could be easier to explain a high fraction of performance (measured in effective training compute) on a downstream task (e.g., if it's a task that humans can do as well as models), but also some annoyances, such as only having a small amount of task data.
I’m skeptical that (2) will qualitatively matter much, but I can see the intuition.