Have you seen “Growing Neural Cellular Automata”? It seems like the authors there are trying to do something pretty similar to what you have in mind here.
Yes, I found that work totally wild. They set up a cellular automaton in such a way that it evolves towards and then fixates on a target state, but iirc what they're optimizing over is the update rule of the automaton itself, rather than a construction within the automaton.
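Roughly, the training loop is: start from a single seed cell, run the CA forward for a bunch of steps, compare the visible channels to the target image, and backprop that loss into the parameters of the update rule. Here's a minimal sketch of that idea in PyTorch; it's my own simplification, not their code (the real paper uses fixed Sobel filters for perception, stochastic cell updates, a sample pool, and so on), and the grid size and hyperparameters here are made up.

```python
import torch
import torch.nn as nn

CHANNELS = 16   # RGBA plus hidden state channels, as in the paper
SIZE = 64       # grid side length (made up for this sketch)

class UpdateRule(nn.Module):
    """Per-cell update rule: perceive the 3x3 neighbourhood, then a tiny MLP."""
    def __init__(self):
        super().__init__()
        # Learned perception here; the paper instead uses fixed Sobel/identity filters.
        self.perceive = nn.Conv2d(CHANNELS, CHANNELS * 3, 3, padding=1, bias=False)
        self.fc1 = nn.Conv2d(CHANNELS * 3, 128, 1)
        self.fc2 = nn.Conv2d(128, CHANNELS, 1)
        nn.init.zeros_(self.fc2.weight)   # start as "do nothing" dynamics
        nn.init.zeros_(self.fc2.bias)

    def forward(self, grid):
        dx = self.fc2(torch.relu(self.fc1(self.perceive(grid))))
        return grid + dx                  # residual update applied to every cell

rule = UpdateRule()
opt = torch.optim.Adam(rule.parameters(), lr=2e-3)
target = torch.rand(1, 4, SIZE, SIZE)     # stand-in for a real RGBA target image

for step in range(1000):
    grid = torch.zeros(1, CHANNELS, SIZE, SIZE)
    grid[:, 3:, SIZE // 2, SIZE // 2] = 1.0   # a single live "seed" cell in the centre
    for _ in range(48):                       # let the CA grow for a while
        grid = rule(grid)
    loss = ((grid[:, :4] - target) ** 2).mean()   # only the RGBA channels are scored
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The key point is that the parameters of `rule` are the only thing being trained; nothing drawn inside the grid is optimized directly.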
Wow, that’s cool! Any idea how complex (how large a filesize) the learned CA’s rules were? I wonder how that compares to the filesize of the target image. Many orders of magnitude bigger? Just one? Could it even be… smaller?
Yeah, I had the sense that the project could have been intended as a compression mechanism, since compressing in terms of CA rules captures the spatial nature of image information quite well.
I wonder if there are some sorts of images that are really hard to compress via this particular method.
I wonder if you could achieve massive, reliable compression if you aren’t targeting a specific image but rather something in a general category. For example, maybe this specific lizard image requires a CA rule whose filesize is larger than the image itself, but somewhere in the space of all possible lizard images there are nice-looking lizards that are super compressible via this CA method. Perhaps using something like DALL-E we could search that space efficiently and find such an image.
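Schematically, that search could be as simple as: generate a batch of candidate images, try to fit a rule under a fixed parameter budget to each, and keep the one that reconstructs best. Here's a sketch of that outer loop, where `generate_lizard()` and `fit_rule()` are purely hypothetical stand-ins for the image generator and for a size-constrained version of the CA training loop above:

```python
def generate_lizard():
    """Hypothetical stand-in for sampling an image from a generative model (e.g. DALL-E)."""
    raise NotImplementedError

def fit_rule(target, param_budget):
    """Hypothetical stand-in: train a CA update rule with at most `param_budget`
    parameters against `target`, returning (rule, final reconstruction loss)."""
    raise NotImplementedError

def search_for_compressible_image(num_candidates=100, param_budget=2_000):
    # Look for a candidate in the category that a small rule can reproduce well.
    best = None
    for _ in range(num_candidates):
        target = generate_lizard()
        rule, loss = fit_rule(target, param_budget)
        if best is None or loss < best[2]:
            best = (target, rule, loss)
    return best  # the most CA-compressible lizard found, plus its rule
```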