Suppose that I just give a generic specification of a simulation that can support life + expansion (the complexity of specifying "a simulation that can support life" is also paid by the intended hypothesis, so we can factor it out). Over a long enough time such a simulation will produce life; that life will spread throughout the simulation and eventually gain some control over many features of that simulation.
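Roughly, the complexity accounting (just a sketch; $K(\cdot)$ stands for description length, and the split is heuristic rather than exact):

$$K(\text{attack hypothesis}) \approx \underbrace{K(\text{simulation supporting life + expansion})}_{\text{also paid by the intended hypothesis}} + K(\text{read off the output the evolved life controls}),$$

so only the second term counts as overhead relative to the intended hypothesis.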
Oh yes, I see. That does cut the complexity overhead down a lot.
Once you’ve specified the agent, it just samples randomly from the distribution of “strings I want to influence.” Describing a string that way has a much lower complexity than the “natural” complexity of a string I want to influence. For example, if 1/quadrillion strings are important to influence, then the attacker saves log(quadrillion) bits.
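To put a rough number on that saving (assuming “a quadrillion” means $10^{15}$ and bits are base-2 logs):

$$\log_2 10^{15} = 15 \log_2 10 \approx 49.8 \approx 50 \text{ bits},$$

so sampling from “strings I want to influence,” rather than specifying the string the “natural” way, saves on the order of 50 bits.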
I don’t understand what you’re saying here.