Okay, so I think I’m understanding a little better now. What you’re getting at is that self-generated true and useful philosophical insights become more and more likely to cause an AI to crash out of its domain of trained validity the smarter the AI gets, because philosophical insights are adversarial examples for many possible very smart beings. And therefore the order in which philosophical insights arrive can cause one insight to start propagating crash behavior, beginning in an agentic subnetwork and spreading through the rest of the network of nearby internal and external compute components?