Could you give an example of the sort of distinction you’re pointing at? Because I come to completely the opposite conclusion.
Part of my job is applied mathematics. I’d rather read a paper applying one technique to a variety of problems, than a paper applying a variety of techniques to one problem. Seeing the technique used on several problems lets me understand how and when to apply it. Seeing several techniques on the same problem tells me the best way to solve that particular problem, but I’ll probably never run into that particular problem in my work.
But that’s just me; presumably you want something else out of reading the literature. I would be interested to know what exactly.
I guess this perspective is informed by empirical ML / AI safety research. I don’t really do applied math.
For example, I considered writing a survey on sparse autoencoders a while ago, but the field changed very quickly and I now think they are probably not the right approach.
In contrast, this paper from 2021 on open challenges in AI safety still holds up very well. https://arxiv.org/abs/2109.13916
In some sense I think big, comprehensive survey papers on techniques / paradigms only make sense once the hard bottlenecks are solved and there are many parallelizable, incremental directions to pursue from there. E.g. once people figured out that scaling pre-training for LLMs ‘just works’, it made sense to write a survey about that plus the future opportunities.
Quite right. AI safety is moving very quickly and doesn’t have any methods that are well-understood enough to merit a survey article. Those are for things that have a large but scattered literature, with maybe a couple of dozen to a hundred papers that need surveying. That takes a few years to accumulate.