you can’t conclude that anything similar holds for the octopus behaviour until you actually observe it
… or unless you derive strong mathematical proofs about which of their features agentic systems preserve under self-modification, and design your system so that it approximates these idealized agents and the octopus behavior is among the preserved features.
If you ~randomly sample superintelligent entities from a wide distribution meeting some desiderata, as modern DL does, then yeah, there are no such guarantees. But that's surely not the only way to design minds (much like “train an NN to do modular addition” is not the only way to write a modular-addition algorithm), and in the context of a movie, we can charitably assume the AI was built using one of the more tractable avenues.
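To make the analogy concrete (a minimal sketch, not anything from the original comment): if you write the modular-addition algorithm directly, the property you care about holds by construction for every input, whereas a trained network only gives you empirical evidence on the inputs you happen to check.

```python
def mod_add_direct(a: int, b: int, p: int = 113) -> int:
    # The property "output == (a + b) mod p" holds for every input
    # by construction -- no training, sampling, or testing needed
    # to guarantee it.
    return (a + b) % p

# By contrast, an NN trained on (a, b) -> (a + b) mod p can only be
# checked empirically; here we can exhaustively verify the direct
# implementation over the whole domain.
assert all(
    mod_add_direct(a, b) == (a + b) % 113
    for a in range(113) for b in range(113)
)
```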