That’s a very interesting observation. As far as I understand it, deep neural networks have essentially unlimited freedom in how they are wired: a particular “function” can live anywhere in the network, be duplicated in several places, or be spread out across and within layers. And if you retrain that same network, the function will typically show up somewhere else, in another form. That makes it seem like you’d need something like a CNN to reliably identify functional groups within another model, if it’s even possible.
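One concrete reason the “same function in a different place” happens is permutation symmetry: hidden units within a layer can be reordered without changing the network’s behavior at all, so a retrained network has no reason to put a given feature in the same slot. A minimal sketch of that symmetry (toy numpy MLP, all weights and sizes made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-layer MLP: x -> ReLU(x @ W1 + b1) @ W2 + b2
W1, b1 = rng.normal(size=(4, 8)), rng.normal(size=8)
W2, b2 = rng.normal(size=(8, 3)), rng.normal(size=3)

def mlp(x, W1, b1, W2, b2):
    h = np.maximum(x @ W1 + b1, 0.0)  # hidden activations (ReLU)
    return h @ W2 + b2

# Shuffle the hidden units: permute the columns of W1/entries of b1,
# and permute the rows of W2 the same way to compensate.
perm = rng.permutation(8)
W1p, b1p, W2p = W1[:, perm], b1[perm], W2[perm, :]

x = rng.normal(size=(5, 4))
# The reshuffled network computes exactly the same function.
print(np.allclose(mlp(x, W1, b1, W2, b2), mlp(x, W1p, b1p, W2p, b2)))  # True
```

And that’s only the simplest symmetry; rescalings and more general reparameterizations give even more ways for the same computation to hide in different weights.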