I agree with you that it is obviously true that we won’t be able to make detailed predictions about what an AGI will do without running it. In other words, the most efficient source of information will be empiricism in the precise deployment environment. The AI safety plans that are likely to robustly help alignment research will be those that make empiricism less dangerous for AGI-scale models. Think BSL-4 labs for dangerous virology experiments, which would be analogous to airgapping, sandboxing, and other AI control methods.
I only agree with the first sentence here, and I don't think the rest of the paragraph follows from it. Being able to safely experiment on AGIs would be useful, but it isn't a replacement for what interpretability is trying to do. Deception is a good example: how do you empirically tell whether a model is deceptive without giving it a chance to actually execute a treacherous turn? You'd have to fool the model, and there are big obstacles to that. Maybe relaxed adversarial training could help, but that's also more of a research direction than a concrete method for now; for any specific alignment approach, it's easy to find challenges. If there is a specific problem that people currently plan to solve with interpretability, and that you think could be solved better by some other method based on safely experimenting with the model, I'd be interested to hear that example. That seems more fruitful than abstract arguments. (Alternatively, you'd have to argue that interpretability is entirely doomed and that we should stop pursuing it even without better alternatives for now; I don't think your arguments are strong enough for that.)
But as you said, this is an unrealistically optimistic picture.
I want to clarify that any story for solving deception (or similarly big obstacles) that’s as detailed as what I described seems unrealistically optimistic to me. Out of all stories this concrete that I can tell, the interpretability one actually looks like one of the more plausible ones to me.
In your model, why did the Human Brain Project crash and burn? Should we expect interpreting AGI-scale neural nets to succeed where interpreting biological brains failed?
This is actually something I’d be interested to read more about (e.g. I think a post looking at what lessons we can learn for interpretability from neuroscience and attempts to understand the brain could be great). I don’t know much about this myself, but some off-the-cuff thoughts:
I think mechanistic interpretability might turn out to be intractably hard in the near future, and I agree that the difficulty of understanding the brain is some evidence for that.
OTOH, there are some advantages for NN interpretability that feel pretty big to me: we can read off arbitrary weights and activations extremely cheaply at any time, we can get gradients of lots of different things, we can design networks/training procedures to make interpretability somewhat easier, we can watch how the network changes during its entire training, we can do stuff like train networks on toy tasks to create easier versions to study, and probably more I'm forgetting right now. (A minimal code sketch of the first two of these is at the end of this comment.)
Your post briefly mentions these advantages but then dismisses them because they do “not seem to address the core issue of computational irreducibility”—as I said in my first comment, I don’t think computational irreducibility rules out the things people realistically want to get out of interpretability methods, which is why for now I’m not convinced we can draw extremely strong conclusions from neuroscience about the difficulty of interpretability.
ETA: so to answer your actual question about what I think happened with the HBP: in part, they didn't have those advantages (and without them, I do think mechanistic interpretability would be insanely difficult). Based on the Guardian post you linked, it also seems they may have been more ambitious than interpretability researchers (i.e. actually trying to make very fine-grained predictions).
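To make the first two of those advantages concrete, here's a minimal sketch (in PyTorch, with a hypothetical toy MLP standing in for the model we'd want to interpret) of reading off an intermediate activation with a forward hook and taking the gradient of an arbitrary scalar with respect to it. It's not meant as an interpretability method, just an illustration of how cheap this kind of access is compared to probing a biological brain:

```python
import torch
import torch.nn as nn

# Hypothetical toy MLP standing in for "the network we want to interpret".
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 4),
)

activations = {}

def save_activation(name):
    # Forward hook: stash this layer's output so we can inspect it later.
    def hook(module, inputs, output):
        activations[name] = output
    return hook

model[1].register_forward_hook(save_activation("relu"))

x = torch.randn(8, 16)
logits = model(x)

# Any internal activation is just a tensor we can read at any time.
hidden = activations["relu"]  # shape (8, 32)

# Gradients of an arbitrary scalar w.r.t. an intermediate activation are one call away.
grad_hidden = torch.autograd.grad(logits[:, 0].sum(), hidden)[0]

# Weights are equally cheap to read off (no electrodes or microscopy required).
first_layer_weights = model[0].weight.detach()

print(hidden.shape, grad_hidden.shape, first_layer_weights.shape)
```

The same kind of hooks and gradient calls work unchanged at any layer and at any point during training, which is what I mean by being able to watch the network over its entire training run.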