Nice post, very much the type of work I’d like to see more of.
Thank you!
I’m not sure I’d describe this work as “notorious”, even if some have reservations about it.
Oops, terrible word choice on my part. I edited the article to say “gained attention” rather than “gained notoriety”.
I think this is incorrect: for example, “biological systems are highly modular, at multiple different scales”. And I expect deep learning to construct minds which are also fairly modular. That also makes search more useful, because it can make changes which are comparatively isolated.
Yes, I agree with this, but modularity is only part of what is needed for comprehensibility. Chris Olah’s work on circuits in convnets suggests that convnets trained on image recognition tasks are somewhat modular, but it’s still very difficult to tease them apart and understand them. Biological trees are modular in many ways, yet we’re still working on understanding how trees work after many centuries of investigation.
You might say that comprehensibility = modularity + stories. You need artifacts that decompose into subsystems, and you need stories about that decomposition and what the pieces do so that you’re not left figuring it out from scratch.