My impression is that a “model” in this context is fundamentally about predicting the future; I don’t see how having the network answer “how many spaceships are on the screen right now?” would tell us that it has built anything more complex than a simple pattern-recognizer.
Maybe a better way to detect an internal model is to search for subnetworks that are predicting the future activations of some network nodes. For instance, we might have a part of the network that detects the number of spaceships on the screen right now, which ought to be a simple pattern-recognition subnetwork. Then we search for a node in the network that predicts the future activation of that spaceship-recognition node, by computing the lagged cross-correlation between the time series of the two nodes. If we find one, that is strong evidence that the network is internally simulating the future in some way.
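As a concrete illustration (not from the original comment), here is a minimal sketch of that detection step: given the activation time series of a candidate recogniser node and a candidate predictor node, check whether the predictor’s current activation correlates with the recogniser’s future activation at some lag. The function names, threshold, and synthetic data are all illustrative assumptions.

```python
import numpy as np

def lagged_correlation(predictor: np.ndarray, recogniser: np.ndarray, lag: int) -> float:
    """Pearson correlation between predictor[t] and recogniser[t + lag]."""
    assert lag > 0 and len(predictor) == len(recogniser)
    x = predictor[:-lag]   # predictor node's activation now
    y = recogniser[lag:]   # recogniser node's activation `lag` steps later
    return float(np.corrcoef(x, y)[0, 1])

def find_predictive_lags(predictor, recogniser, max_lag=20, threshold=0.5):
    """Return the lags at which the predictor node anticipates the recogniser node."""
    return [k for k in range(1, max_lag + 1)
            if abs(lagged_correlation(predictor, recogniser, k)) > threshold]

# Synthetic check: `pred` leads `recog` by 5 time steps, plus noise.
rng = np.random.default_rng(0)
pred = rng.normal(size=1000)
recog = np.roll(pred, 5) + 0.1 * rng.normal(size=1000)
print(find_predictive_lags(pred, recog, max_lag=10))  # expect a hit at lag 5
```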
In fact, we could use this to enforce model-building: divide the network into a pattern-recognition half, which is left alone (no extra constraints put on it), and a modelling half, which is forced to have its outputs predict the future values of the pattern-recognition half (again via multi-objective optimisation). Of course, the pattern-recognition half could also take inputs from the modelling half.
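A hedged sketch, assuming a PyTorch implementation, of how that split might look: the recognition half is trained only through the task loss, while the modelling half gets an extra objective of predicting the recognition half’s next-step activations. The module names, layer sizes, and the fixed weighted sum standing in for multi-objective optimisation are all assumptions of mine, and the feedback connection from the modelling half back into the recognition half is omitted for brevity.

```python
import torch
import torch.nn as nn

OBS_DIM, FEAT_DIM = 128, 16  # illustrative sizes

class RecognitionHalf(nn.Module):
    """Pattern-recognition half: shaped only by the task loss."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, FEAT_DIM))

    def forward(self, obs):
        return self.net(obs)

class ModellingHalf(nn.Module):
    """Modelling half: its output must predict the recognition half's next activations."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(FEAT_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, FEAT_DIM))

    def forward(self, features):
        return self.net(features)

def combined_loss(recogniser, modeller, obs_t, obs_t1, task_loss, beta=1.0):
    """task_loss is the already-computed loss on the main task (a scalar tensor)."""
    feats_t = recogniser(obs_t)
    predicted_feats_t1 = modeller(feats_t)
    # Detach the target so the prediction objective shapes the modelling half,
    # not the recognition half it is trying to predict.
    target_feats_t1 = recogniser(obs_t1).detach()
    prediction_loss = nn.functional.mse_loss(predicted_feats_t1, target_feats_t1)
    # A fixed weighted sum stands in here for proper multi-objective optimisation.
    return task_loss + beta * prediction_loss
```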
Interesting idea. I might use that; thanks!