Thanks very much for the comments; I think you’ve asked a bunch of very good questions. I’ll try to give some thoughts:
Deep learning as a field isn’t exactly known for its rigor. I don’t know of any rigorous theory that isn’t, as you say, purely ‘reactive’, and none of it has led to any significant ‘real world’ results. As far as I can tell, this isn’t for lack of trying either. This has made me doubt its mathematical tractability, whether that’s because our current mathematical understanding is lacking or because of something else (DL not being as ‘reductionist’ as other fields?). How do you lean in this regard? You mentioned that you’re not sure how amenable interpretability itself is, but would you guess that it’s more or less amenable than deep learning as a whole?
I think I share your general concern here, and I’m uncertain about it. I agree that people had been trying for a while to figure out the right way to think about deep learning mathematically, and that for a while it seemed like there wasn’t much progress. But I mean it when I say these things can be slow. And I think the situation is developing and has changed, perhaps significantly, in the last ~5 years or so, with things like the neural tangent kernel, the Principles of Deep Learning Theory results, and increasingly high-quality work on toy models. (And even when work looks promising, it may still take a while longer for the cycle to complete and for us to get ‘real world’ results back out of these mathematical points of view, but I have more hope than I did a few years ago.) My current opinion is that certain aspects of interpretability will be more amenable to mathematics than understanding DNN-based AI as a whole.
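(For readers who haven’t met it, here is a minimal sketch of the kind of object the neural tangent kernel work studies, stated in its standard form rather than anything specific to this thread. For a network $f(x;\theta)$ with parameters $\theta$, the NTK is

$$\Theta(x, x') \;=\; \nabla_\theta f(x;\theta)^\top \, \nabla_\theta f(x';\theta),$$

and the infinite-width results show that this kernel stays essentially fixed during gradient-descent training, so the training dynamics reduce to kernel regression with $\Theta$.)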
How would success here relate to capabilities research? A general criticism of interpretability research is that it also leads to heightened capabilities; would this fare better or worse in that regard? I would have assumed that a developed rigorous theory of interpretability would probably also entail significant development of a rigorous theory of deep learning.
I think your worries are basically sound. If what one is doing is something like ‘technical work aimed at understanding how NNs work’, then I don’t see there as being much distinction between capabilities and alignment; you are really generating insights that can be applied in many ways, some good, some bad (and part of my point is that you have to be allowed to follow your instincts as a scientist/mathematician in order to find the right questions). But I do think that, given how slow and academic the kind of work I’m talking about is, it’s neglected by both a) short-timelines-focussed alignment people and b) capabilities people.
How likely is it that the direction one proceeds in would be correct? You mention an example in mathematical physics, but note that it’s perhaps relatively unimportant that this work was done for ‘pure’ reasons. This is surprising to me, as I thought that a major motivation for pure math research, like other blue-sky research, is that it’s often not apparent whether something will be useful until it’s well developed. I think this is similar to your point that the small-scale problems will not be like the larger problem. You mention that this involves following one’s nose mathematically; do you think this is possible in general, or only in this case? If it’s the latter, why do you think interpretability is specifically amenable to it?
Hmm, that’s interesting. I’m not sure I can say how likely it is that one would go in the correct direction. But in my experience, the idea that ‘possible future applications’ are one of the motivations for mathematicians to do ‘blue sky’ research is not quite right. I think the key point is that the things mathematicians end up chasing for ‘pure’ mathematical/aesthetic reasons seem to be oddly and remarkably relevant when we try to describe natural phenomena (iirc this is basically a key point in Wigner’s famous ‘Unreasonable Effectiveness’ essay). So I think my answer to your question is that this seems to be something that happens “in general”, or at least does happen in various different places across science/applied math.