Expanding on this now that I’ve a little more time:
Although I haven’t had a chance to perform due diligence on various aspects of this work, or the people doing it, or perform a deep dive comparing this work to the current state of the whole field or the most advanced work on LLM exploitation being done elsewhere,
My current sense is that this work indicates promising people doing promising things, in the sense that they aren’t just doing surface-level prompt engineering, but are using technical tools to find internal anomalies that correspond to interesting surface-level anomalies, maybe exploitable ones, and are then following up on the internal technical implications of what they find.
This looks to me like (at least the outer ring of) security mindset; they aren’t imagining how things will work well, they are figuring out how to break them and make them do much weirder things than their surface-apparent level of abnormality. We need a lot more people around here figuring out how things will break. People who produce interesting new kinds of AI breakages should be cherished and cultivated as a priority higher than a fair number of other priorities.
In the narrow regard in which I’m able to assess this work, I rate it as scoring very high on an aspect that should bear on receiving future funding. If anyone else knows of a reason not to fund the researchers who did this, like a low score along some metric I didn’t examine, or because this is somehow less impressive as a feat of anomaly-finding than it looks, please contact me, including via email or LW direct message; otherwise I might go scurrying around trying to arrange funding for this if it’s not otherwise funded.
I’m confused: Wouldn’t we prefer to keep such findings private? (At least until OpenAI says something like “this model is reliable/safe”?)
My guess: You’d reply that finding good talent is worth it?
I’m confused by your confusion. This seems much more alignment than capabilities; the capabilities are already published, so why not yay publishing how to break them?
Because (I assume) once OpenAI[1] says “trust our models”, that’s the point when it would be useful to publish our breaks.
Breaks that weren’t published yet, so that OpenAI couldn’t patch them yet.
[unconfident; I can see counterarguments too]
Or maybe when the regulators or experts or the public opinion say “this model is trustworthy, don’t worry”
I would not argue against this receiving funding. However, I would caution that, although I have not done research of this caliber myself and should not be read as claiming I could do better at this time, this is a very early step of the research, and I would hope to see significant movement toward detecting anomalies of higher complexity than the mere token level. I have no object-level objection to your perspective; I hope that follow-ups get funded and that researchers are only very gently encouraged to stay curious and not fall into a spotlight effect. My comment is primarily about considerations if more researchers than OP zoom in on this. Like capabilities, alignment research progress seems to me like it should be at least exponential. E.g., a prompt for passers-by: as American Fuzzy Lop is to early fuzzers, what would the next version be to this article’s approach?
edit: I thought to check if exactly that had been done before, and it has!
https://arxiv.org/abs/1807.10875
https://arxivxplorer.com/?query=https%3A%2F%2Farxiv.org%2Fabs%2F1807.10875
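For passers-by wondering what “token-level” anomaly detection looks like mechanically: as described later in the thread, the work reportedly used a mix of clustering/distance structure in embedding space and input optimization. Below is a purely illustrative sketch of the first half on synthetic data — the dimensions, threshold, and data are all invented here, and this is not the original authors’ code.

```python
import math
import random

random.seed(0)

def dist(a, b):
    """Euclidean distance between two vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Synthetic stand-in for a model's token-embedding matrix: 200 "ordinary"
# tokens in one region of space, plus 3 far-away "anomalous" tokens.
normal = [[random.gauss(0.0, 1.0) for _ in range(8)] for _ in range(200)]
outliers = [[random.gauss(25.0, 1.0) for _ in range(8)] for _ in range(3)]
embeddings = normal + outliers

# Centroid of the full embedding set.
dim = len(embeddings[0])
centroid = [sum(vec[i] for vec in embeddings) / len(embeddings)
            for i in range(dim)]

# Flag tokens whose distance from the centroid is a statistical outlier
# (z-score above an arbitrary threshold of 3).
dists = [dist(vec, centroid) for vec in embeddings]
mean_d = sum(dists) / len(dists)
std_d = math.sqrt(sum((d - mean_d) ** 2 for d in dists) / len(dists))
anomalous = [i for i, d in enumerate(dists) if (d - mean_d) / std_d > 3.0]
```

In a real pipeline, candidates flagged this way would then feed the second half: optimizing prompts to elicit specific behavior from those tokens.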
...
The point of funding these individuals is that their mindset seems productive, not that this specific research is productive (even if it is). I think the theory is like:
“Although good ideas are understandably seductive, for early-stage investing they are mostly valuable as a way to identify good founders.”
https://blog.samaltman.com/how-to-invest-in-startups
yeah, makes sense. hopefully my comment was useless.
I could be mistaken, but I believe that’s roughly how OP said they found it.
no, this was done through a mix of clustering and optimizing an input to get a specific output, not coverage-guided fuzzing, which optimizes inputs to produce new behaviors according to a coverage measurement. but more generally, I’m proposing to compare generations of fuzzers and try to take inspiration from the ways fuzzers have changed since their inception. I’m not deeply familiar with those changes though—I’m proposing it would be an interesting source of inspiration, not that the trajectory should be copied exactly.
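To make the contrast concrete, here is a minimal toy sketch of the coverage-guided loop described above: keep any mutated input that exercises new behavior under a coverage measurement, and reuse it as a seed. The target, mutation operators, and “coverage” below are all invented for illustration; real fuzzers like AFL instrument compiled binaries rather than calling a Python function.

```python
import random

random.seed(0)

def target(data: bytes) -> set:
    """Toy target: returns the set of branch IDs the input exercises.
    Stands in for instrumentation-derived coverage of a real program."""
    branches = set()
    if data[:1] == b"F":
        branches.add(1)
        if data[1:2] == b"U":
            branches.add(2)
            if data[2:3] == b"Z":
                branches.add(3)
    return branches

def mutate(data: bytes) -> bytes:
    """Randomly overwrite, insert, or delete a single byte."""
    buf = bytearray(data)
    op = random.randrange(3)
    if op == 0 and buf:
        buf[random.randrange(len(buf))] = random.randrange(256)
    elif op == 1:
        buf.insert(random.randrange(len(buf) + 1), random.randrange(256))
    elif op == 2 and buf:
        del buf[random.randrange(len(buf))]
    return bytes(buf)

def fuzz(seeds, iterations=20000):
    """Coverage-guided loop: keep any mutant that reaches new coverage."""
    corpus = list(seeds)
    seen = set()
    for s in corpus:
        seen |= target(s)
    for _ in range(iterations):
        candidate = mutate(random.choice(corpus))
        cov = target(candidate)
        if cov - seen:                   # new behavior observed
            seen |= cov
            corpus.append(candidate)     # promote it to a seed
    return corpus, seen
```

Note the difference from the approach in the post: the search here is rewarded for reaching *any* new internal behavior, rather than being optimized toward one specific output.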