Lots of reasons. Neural networks are modelled after brains. They both form distributed representations at very large scales, they both learn over time, etc., etc. Sure, you’ve pointed out a few differences, but the similarities are so great that they should be the main anchor for our expectations (rather than, say, thinking that we’ll understand NNs the same way we understand support vector machines, or the same way we understand tree search algorithms, or...).
I’m not convinced that these similarities are great enough to merit such anchoring. That NNs have more in common with brains than with SVMs does not imply that we will understand NNs in roughly the same ways that we understand biological brains; we might understand them in a different set of ways entirely, distinct both from how we understand brains and from how we understand SVMs.
Rather than arguing over reference class, it seems like it would make more sense to note the specific ways in which NNs are similar to brains, and what hints those specific similarities provide.
Perhaps a good way to summarize all this is something like “qualitatively similar models probably work well for brains and neural networks”. I agree to a large extent with that claim (though there was a time when I would have agreed much less), and I think that’s the main thing you need for the rest of the post.
“Ways we understand” comes across as more general than that: e.g., we understand brains by experimentally probing physical neurons, versus understanding an NN by spectral clustering of a derivative matrix.
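For concreteness, here is a minimal sketch of what that second kind of understanding could look like: spectrally clustering a network’s input dimensions based on its Jacobian (a derivative matrix). The toy network, the cosine-similarity affinity, and the cluster count are all illustrative assumptions, not a method anyone in this exchange actually proposed.

```python
# Illustrative sketch: "spectral clustering of a derivative matrix".
# Take a small network, compute the Jacobian of outputs w.r.t. inputs,
# build a similarity matrix over input dimensions, and spectrally
# cluster them. All choices below are assumptions for illustration.

import numpy as np
import torch
from sklearn.cluster import SpectralClustering

torch.manual_seed(0)

# Toy network standing in for "the NN we want to understand".
net = torch.nn.Sequential(
    torch.nn.Linear(8, 16),
    torch.nn.Tanh(),
    torch.nn.Linear(16, 4),
)

x = torch.randn(8)
jac = torch.autograd.functional.jacobian(net, x)  # shape (4, 8)

# Similarity between input dimensions: absolute cosine similarity
# between their columns of the Jacobian (an illustrative choice).
J = jac.detach().numpy()
cols = J / (np.linalg.norm(J, axis=0, keepdims=True) + 1e-12)
similarity = np.abs(cols.T @ cols)  # shape (8, 8), symmetric

clusters = SpectralClustering(
    n_clusters=2, affinity="precomputed", random_state=0
).fit_predict(similarity)
print(clusters)  # cluster label for each input dimension
```

The point of the contrast is that nothing like this procedure is available for physical neurons, even if the cluster-level model it produces ends up qualitatively similar to one we would build of a brain.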