Do the scientists ever need to know how the game of life works, or can the heuristic arguments they find remain entirely opaque?
The scientists don’t start off knowing how the game of life works, but they do know how their model works.
The scientists don’t need to follow along with the heuristic argument, or do any ad hoc work to “understand” it. But they could look at the internals of the model and follow along with the argument if they wanted to; that is, it’s important that their methods open up the model even if they never actually use that access.
Intuitively, the scientists are like us evaluating heuristic arguments about how activations evolve in a neural network without necessarily having any informal picture of how those activations correspond to the world.
Where do they (the scientists) notice these fewer live cells? Do they have some deep interpretability technique for examining the generative model and “seeing” its grid of cells?
This was confusing shorthand.
They notice that the A-B correlation is stronger when the A and B sensors are relatively quiet. If there are other sensors, they also notice that the A-B pattern is more common when those other sensors are quiet.
That is, I expect they learn a notion of “proximity” amongst their sensors, and an abstraction of “how active” a region is, in order to explain the fact that active areas tend to persist over time and space and to be accompanied by more 1s and more variability on sensors. Then they notice that A-B correlations are more common when the area around A and B is relatively inactive.
But they can’t directly relate any of this to the actual presence of live cells. (Though they can ultimately use the same method described in this post to discover a heuristic argument explaining the same regularities they explain with their abstraction of “active.” As a result they can e.g. distinguish the case where the zone including A and B is active (and so both of them tend to exhibit more 1s and more irregularity) from the case where there is a coincidentally high degree of irregularity in those sensors, or independent pockets of activity around each of A and B.)
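As a toy illustration of the setup described above, the sketch below evolves random Game of Life grids, reads two nearby cells through noisy binary sensors A and B, and compares the A-B correlation between trials where the surrounding region is quiet versus active. Every concrete detail here (grid size, sensor placement, noise rate, the activity window and threshold) is an assumption made for the sketch, not something specified in the discussion.

```python
import numpy as np

def life_step(grid):
    """One step of Conway's Game of Life on a toroidal (wrapping) grid."""
    # Count live neighbors by summing the eight shifted copies of the grid.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # A cell is alive next step if it has 3 neighbors, or is alive with 2.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(np.uint8)

def run_trial(rng, size=32, steps=20, density=0.3):
    """Evolve a random grid; return noisy A/B sensor readings and local activity."""
    grid = (rng.random((size, size)) < density).astype(np.uint8)
    for _ in range(steps):
        grid = life_step(grid)
    # Hypothetical sensors: each reads one cell, with its bit flipped 10% of the time.
    a_pos, b_pos = (8, 8), (8, 10)  # assumed nearby placements
    def read(pos):
        return int(grid[pos]) ^ int(rng.random() < 0.1)
    # "Activity" of the region around A and B: live-cell count in a window.
    activity = int(grid[4:14, 4:14].sum())
    return read(a_pos), read(b_pos), activity

def corr(pairs):
    """Pearson correlation of the two sensor streams, 0.0 if degenerate."""
    if len(pairs) < 2:
        return 0.0
    a, b = np.array(pairs, dtype=float).T
    if a.std() == 0 or b.std() == 0:
        return 0.0
    return float(np.corrcoef(a, b)[0, 1])

rng = np.random.default_rng(0)
trials = [run_trial(rng) for _ in range(2000)]
quiet = [(a, b) for a, b, act in trials if act < 10]
active = [(a, b) for a, b, act in trials if act >= 10]

print("quiet-region A-B correlation:", round(corr(quiet), 3))
print("active-region A-B correlation:", round(corr(active), 3))
```

The scientists in the story only ever see the sensor streams and the activity abstraction, never the grid; the simulation makes the grid explicit only so that we, standing outside, can check what their conditional statistics are tracking.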