If the human knows the logic of the random number generator that was used to initialize the parameters of the original network, they can simply run the same logic themselves by hand.
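(A minimal sketch of the reproducibility point: given the same seed and the same generator logic, anyone can regenerate the "random" initialization exactly. The function name and the uniform init are illustrative assumptions, not any particular network's scheme.)

```python
import random

def init_params(seed, n):
    # Hypothetical example: rerun the same PRNG logic the original
    # network used, starting from the same seed.
    rng = random.Random(seed)
    return [rng.uniform(-1, 1) for _ in range(n)]

# A second party running the same logic gets identical parameters.
a = init_params(42, 5)
b = init_params(42, 5)
assert a == b
```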
Presumably the random seed is going to be big and complicated.
It also wouldn’t just be the random seed you’d need to memorize. You’d have to memorize all the training data and labels too. The scenario I gave was intended to be one where you were asked to study an algorithm for a while, and then afterwards operate it on a blank sheet of paper, given testing data. Even if you had the initialization seed, you wouldn’t just be able to spin up neural networks to make predictions...
(just noting that the same goes for a decision tree that isn’t small enough for the human to memorize)
Yes, exactly. That’s why the authors of tree regularization impose large penalties on large trees.