What kind of work do you think of?
Useless: Most work that doesn't backchain from a path to ending the acute risk period, bearing in mind that most of the acute risk comes not from LLMs without strong guardrails but from agentic, goal-maximising AIs. Sometimes people do work that is useful for this anyway without tracking this crucial consideration, but the hit-rate is going to be low.
Counterproductive: Work that transfers over to productization and brings more funding and more attention to AI capabilities, especially work that brings dangerously good automated coding and automated research closer. I'd put a good deal of interpretability in this category: being able to open the black box makes it much easier to find ways to improve algorithmic efficiency. Interp could be part of a winning play by an actor who is aware of the broader strategic landscape, but I expect broadcasting it is net negative. Nate's post "If interpretability research goes well, it may get dangerous" is pretty good on this.
What kind of interpretability work do you consider plausibly useful or at least not counterproductive?
The main criterion, I think, is not broadcasting it to organizations without a plan for aligning strong superintelligence that has a chance of working. This probably means not publishing, and it also means being at an employer who has a workable plan.
There might be some types of interp which don't have much capabilities potential and are therefore safe to publish widely. Maybe some of the work focused specifically on detecting deception? But mostly I expect interp to be good only as part of a wider plan with a specific backchained way to end the acute risk period, one which might take advantage of the capabilities boosts interp offers. My steelman of Anthropic is that they're trying to pull something like this off, though they're pretty careful to avoid leaking details of what their wider plan is, if they have one.