Deriving the laws of physics from an image is completely impossible because the data contained in the image is a tiny piece of the actual system, which is image data + observer.
Specifically, there is no way to get from “these bits represent 3 related values in a grid” to “these values represent different intensities of specific wavelengths of perpendicular electric and magnetic fields propagating through space.”
Even if that hypothesis could be generated, there is no data about which wavelengths are represented, nor any information to derive how the light detected by rods and cones gets processed to generate recognition of specific physical things. Nothing in an image says that equal R and G signals mean yellow (or whatever the actual rules are).
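To make the point concrete, here is a minimal sketch (the pixel values and the comment labels are illustrative assumptions): an image is just a grid of number triples, and the triples themselves only encode numeric relations. The association between "R equals G, B is zero" and the percept "yellow" lives in an external convention (e.g. sRGB plus human trichromatic vision), not in the bits.

```python
# An image, stripped of all conventions, is just a grid of 3-tuples.
image = [
    [(255, 255, 0), (0, 0, 255)],    # by sRGB convention: yellow, blue
    [(128, 128, 128), (255, 0, 0)],  # by sRGB convention: gray, red
]

# All the data itself can tell us is a numeric relation between channels.
# That this particular relation is *rendered and perceived* as yellow is
# supplied by the observer/display side of the system, not by the array.
r, g, b = image[0][0]
print(r == g and b == 0)  # prints True: a relation, not a color
```

Nothing in the array distinguishes "these numbers are light intensities" from "these numbers are temperatures on a grid"; that interpretation is exactly the observer half of the system the text describes.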