I don’t know what the standard procedure for this would be, but it occurs to me that if we can specify information about an environment via differential equations inside a neural network, then we can compare that network’s output to the output of one that wasn’t given the same information.
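For concreteness, here’s one way the “differential equations inside the network” part could work, in the style of a physics-informed neural network: add each equation’s residual as a loss term, computed with autograd. This is a minimal PyTorch sketch under made-up assumptions (a single decay equation dx/dt = -k·x, a tiny fully-connected net); none of these specifics come from the idea itself.

```python
import torch
import torch.nn as nn

# Toy assumption for illustration: the environment is exponential decay,
# dx/dt = -k * x. Everything here (k, the architecture, the collocation
# scheme) is hypothetical.
k = 0.5
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

def physics_residual(t):
    """ODE residual dx/dt + k*x at collocation times t (shape [N, 1]);
    it is zero wherever the network's output obeys the assumed dynamics."""
    t = t.requires_grad_(True)
    x = net(t)
    dx_dt = torch.autograd.grad(x, t, grad_outputs=torch.ones_like(x),
                                create_graph=True)[0]
    return dx_dt + k * x

def loss_fn(t_data, x_data, t_colloc, use_physics=True):
    """Ordinary data fit, optionally plus the structural (ODE) penalty.
    use_physics=False is the no-structural-information control."""
    data_loss = nn.functional.mse_loss(net(t_data), x_data)
    if not use_physics:
        return data_loss
    return data_loss + physics_residual(t_colloc).pow(2).mean()
```

The control case is then literally the same optimization with `use_physics=False`, so any comparison isolates the effect of the injected structure.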
In the name of learning more about how to interpret the models, we could try something like:
1) Construct an artificial environment which we can completely specify via a set of differential equations.
2) Train a neural network to learn that environment once for every combination (subset) of those differential equations supplied as structural information.
3) Compare all of those runs to several control cases in which no differential equations are provided.
It seems like the ways the control cases differ from each of the structurally-informed cases should give us some information about how the network learns the environment’s structure on its own.
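And a rough sketch of the sweep in steps 2 and 3, continuing the assumptions above (it reuses `physics_residual`, and `train_model` is a hypothetical stand-in for an ordinary training loop that penalizes whichever residuals we hand it):

```python
import itertools

def train_model(active_residuals):
    """Placeholder for a real training loop: fit a fresh network on the
    environment, penalizing each residual in active_residuals alongside
    the data loss (as in loss_fn above), and return the trained network."""
    ...

# One residual function per differential equation in the environment's spec.
residual_fns = {"decay": physics_residual}

runs = {}
for r in range(len(residual_fns) + 1):  # r = 0 yields the control case
    for names in itertools.combinations(sorted(residual_fns), r):
        runs[names] = train_model([residual_fns[n] for n in names])

# runs[()] received no structural information; comparing its weights and
# outputs against each informed run is the step-3 analysis.
```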