But, of course, these two challenges were completely toy. Future challenges and benchmarks should not be.
I am confused. I imagine that there would still be uses for toy problems in future challenges and benchmarks. Of course, we don’t want exclusively toy problems, but I read this as advocating for the other extreme without adequate support for why, though I may have misunderstood. My defense of toy problems is that they are more broadly accessible, require less investment to iterate on, and let us isolate one specific part of the difficulty, so progress can be made in a single step instead of having to decompose and solve multiple subproblems at once. We can always discard the toy solutions that do not scale to larger models.
In particular, toy problems are especially suitable as a playground for novel approaches that are not yet mature. These are usually not yet performant enough to justify allocating substantial resources to, but may hold promise once the kinks are ironed out. With a robust set of standard toy problems, we can determine which of these new procedures are worth further investigation and refinement. This matters especially in a pre-paradigmatic field like mechanistic interpretability, where we may (as an analogy) be in a geocentric era waiting for heliocentrism to be invented.
Thoughts of mine on this are here. In short, I have argued that toy problems, cherry-picking models/tasks, and a lack of scalability have contributed to mechanistic interpretability being relatively unproductive.