I would not argue against this receiving funding. However, I would caution that, although I have not done research at this caliber myself and should not be read as claiming I could do better at this time, this is a very early step in the research, and I would hope to see significant movement toward higher-complexity anomaly detection than the merely token-level. I have no object-level objection to your perspective, and I hope that follow-ups get funded and that researchers are only very gently encouraged to stay curious and not fall into a spotlight effect; I’d comment primarily on considerations if more researchers than OP are to zoom in on this. Like capabilities progress, alignment research progress seems to me like it should be at least exponential. E.g., a prompt for passers-by: as American Fuzzy Lop is to early fuzzers, what would the next version be to this article’s approach?
edit: I thought to check if exactly that had been done before, and it has!
https://arxiv.org/abs/1807.10875
https://arxivxplorer.com/?query=https%3A%2F%2Farxiv.org%2Fabs%2F1807.10875
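for concreteness, here is the kind of thing I mean by "mere token-level" anomaly detection: a toy sketch, in Python, that flags tokens with high surprisal. the unigram table here is a hypothetical stand-in for a real LM's (contextual) token probabilities, and all the names are my own invention, not from the article.

```python
import math

def token_surprisals(tokens, probs, floor=1e-6):
    """Per-token surprisal (-log p) under a given probability model;
    tokens missing from the model get a small floor probability."""
    return [-math.log(probs.get(t, floor)) for t in tokens]

def flag_anomalies(tokens, probs, threshold=5.0):
    """Flag tokens whose surprisal exceeds the threshold."""
    surprisals = token_surprisals(tokens, probs)
    return [t for t, s in zip(tokens, surprisals) if s > threshold]

# toy unigram table; a real detector would query an LM for p(token | context)
unigram = {"the": 0.05, "cat": 0.01, "sat": 0.01, "on": 0.03, "mat": 0.008}
print(flag_anomalies(["the", "cat", "sat", "on", "xyzzy"], unigram))  # -> ['xyzzy']
```

the limitation I'm gesturing at is visible even in the sketch: it scores tokens individually, so it can't notice an anomalous pattern spread across many individually unsurprising tokens.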
...
The point of funding these individuals is that their mindset seems productive, not that this specific research is productive (even if it is). I think the theory is roughly:
“Although good ideas are understandably seductive, for early-stage investing they are mostly valuable as a way to identify good founders.”
https://blog.samaltman.com/how-to-invest-in-startups
yeah, makes sense. hopefully my comment was useless.
I could be mistaken, but I believe that’s roughly how OP said they found it.
no, this was done through a mix of clustering and optimizing an input to produce a specific output, not coverage-guided fuzzing, which optimizes inputs to produce new behaviors according to a coverage measurement. but more generally, I’m proposing to compare generations of fuzzers and try to take inspiration from the ways fuzzers have changed since their inception. I’m not deeply familiar with those changes, though; I’m proposing that they would be an interesting source of inspiration, not that the trajectory should be copied exactly.
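to make the distinction concrete, here is a minimal sketch of the coverage-guided loop in Python. the target, mutator, and branch ids are all toy stand-ins of my own (real fuzzers like AFL instrument compiled code and use far richer mutation strategies); the point is only the selection rule: an input earns a place in the corpus iff it exercises coverage not seen before, rather than being optimized toward any one specific output.

```python
import random

def target(data: bytes) -> set:
    """Toy program under test; returns the set of branch ids it executed.
    (Stand-in for real instrumentation such as an edge-coverage map.)"""
    hits = {"entry"}
    if len(data) > 0 and data[0] == ord("F"):
        hits.add("F")
        if len(data) > 1 and data[1] == ord("U"):
            hits.add("FU")
            if len(data) > 2 and data[2] == ord("Z"):
                hits.add("FUZ")
    return hits

def mutate(data: bytes) -> bytes:
    """Replace one random byte -- real fuzzers use many richer mutators."""
    b = bytearray(data or b"\x00")
    i = random.randrange(len(b))
    b[i] = random.randrange(256)
    return bytes(b)

def fuzz(iterations: int = 20000, seed: int = 0):
    """Coverage-guided loop: keep a mutated input in the corpus iff it
    produced coverage not seen before."""
    random.seed(seed)
    corpus = [b"AAAA"]
    seen: set = set()
    for _ in range(iterations):
        child = mutate(random.choice(corpus))
        cov = target(child)
        if not cov <= seen:      # new behavior observed -> input is interesting
            seen |= cov
            corpus.append(child)
    return seen, corpus
```

the corpus stays small because only behavior-novel inputs are retained; everything else is discarded, which is what lets this style of search incrementally discover deep branches no single optimization target would point at.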