Nature abhors an immutable replicator… usually
Epistemic status: Popped into my head yesterday, and I am trying to clarify my reasoning about it out loud. I expect that I am overestimating the universality of this principle to at least some degree, and there is some chance that it's altogether wrong, but I think it may be a useful framing.
The claim in a nutshell
Replicating information patterns tend to maintain a significant rate of mutation, even when near-perfect replication is possible in principle, as a result of coopetition between sub-patterns with their own interests, which are not always perfectly aligned with the interests of the whole pattern.
The argument
To the extent that an information pattern is a replicator, its sub-patterns (which form a Boolean lattice whenever the whole is composed of discrete minimal units, such as nucleotides) must also be replicators, in the sense that they are copied whenever the whole is.
Whenever a replicator mutates (that is, whenever a copy of it is made with one or more minimal units modified relative to the parent), any sub-patterns not containing the mutated units are unaffected, while those that do contain them (including the whole original pattern) are essentially "dead," unable to replicate further in that lineage.
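To make this concrete, here is a minimal Python sketch (the toy genome and every detail here are my own illustration): the sub-patterns of a string of discrete units can be identified with the nonempty subsets of its positions, which is exactly the Boolean lattice, and a point mutation at one site "kills" precisely those sub-patterns whose positions include that site.

```python
from itertools import combinations

genome = "ACGT"  # a toy replicator made of discrete minimal units
positions = range(len(genome))

# Every sub-pattern is a subset of positions; ordered by inclusion,
# these subsets form the Boolean lattice (the power set) of the whole.
sub_patterns = [frozenset(c)
                for r in range(1, len(genome) + 1)
                for c in combinations(positions, r)]

mutated_site = 2  # suppose a point mutation hits position 2 (the "G")

# Sub-patterns avoiding the mutated site are copied unchanged in the
# child; those containing it are no longer propagated in this lineage.
survivors = [s for s in sub_patterns if mutated_site not in s]
casualties = [s for s in sub_patterns if mutated_site in s]

print(len(sub_patterns), len(survivors), len(casualties))  # 15 7 8
```

Note that any single site sits inside exactly half of all 2^n subsets, so every point mutation is simultaneously "fatal" to a large fraction of the lattice.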
Thus to the extent that sub-replicators are separately acted upon by selection, they should tend, to first approximation, to evolve towards greater stability—less probability of mutation. They “want” to remain unchanged. (A good example of sub-replicators is genes within a genome. The entire genome is a replicator, but the individual genes are too, with their own independent interests.)
But sometimes a mutation makes the resulting child replicator more fit than its parent for the current environment; such fitness-increasing mutation is, after all, what drives evolution. So in the longer term, sub-patterns benefit from sub-patterns other than themselves being mutable, as long as there are peaks in the fitness landscape which can be climbed via that mutation.
So if sub-replicators are comparable in “influence”—if mutating them has roughly equal effects on the fitness of the whole replicator—they should tend to all “agree” to retain a small chance of mutating in return for the “expectation” that other sub-patterns will be the ones to do it instead, so that they can benefit from the increased fitness of the whole pattern.
If, however, one sub-pattern has a significantly greater effect on the fitness of the whole than the others it is competing with, then it will have no "incentive" to allow itself to mutate, and this equilibrium will break down, with the others mutating at higher rates than it does. In the limit this can even look like, for instance, a single gene in a genome copying itself many times over, using up metabolic energy the others could be using, simply because it can afford to.
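As a toy expected-value sketch of that asymmetry (every number below is invented purely for illustration): suppose letting sub-pattern i mutate buys the whole a chance q of climbing a nearby fitness peak worth some gain, at the risk of a deleterious change whose damage scales with that sub-pattern's influence. Then mutability has positive expected value for low-influence sub-patterns and negative expected value for a heavyweight, which is exactly where you would expect the "agreement" to break down.

```python
q = 0.3      # assumed chance a mutation climbs a nearby fitness peak
gain = 1.0   # assumed whole-pattern fitness gained if it does
cost = 0.3   # assumed damage per unit of influence if it does not

def marginal_value(w_i):
    """Expected whole-fitness change per unit of mutability of a
    sub-pattern with influence w_i (toy model, invented parameters)."""
    return q * gain - (1 - q) * w_i * cost

for w_i in [0.5, 1.0, 2.0, 10.0]:
    print(f"influence {w_i:>4}: {marginal_value(w_i):+.3f}")
# influence  0.5: +0.195  <- mutability worth keeping
# influence  1.0: +0.090  <- still worth a small nonzero rate
# influence  2.0: -0.120  <- prefers to opt out of mutating
# influence 10.0: -1.800  <- "no incentive to allow itself to mutate"
```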
Parallels with alignment
As you might expect, I think there are some parallels here with alignment and coordination. Every utility function (considered as a subagent with partial control over the behavior of at least one agent) in a society of interdependent agents benefits from all the agents it does not control being “malleable”—willing and able to modify or replace their own utility functions (change their goals, value systems, or sense of self).
So as long as they have approximately equal power, everyone is slightly malleable, and the society has “slack” enabling it to evolve with changing conditions. (Societies are not typically thought of as replicators, but they can divide in a crude form of mitosis, and they have differential persistence, and thus are acted upon by selection pressures.) But if one utility function has more power (due to, for instance, controlling an AI or a human dictator or cult leader), it can forcibly modify others in order to replicate itself more intensively—thus leading to misalignment, and, ironically, to a lesser ability to adapt.
Vague implications for further thought
This also has something to do with continuity of identity and the Ship of Theseus. Replicators with more levels of sub-replicators will tend to change more due to the effects of all these dynamics in lower levels, yet in some sense are still “the same entity” as long as the rate of mutation is optimal for most of the sub-replicators. Perhaps a preliminary mathematical definition of continuity of identity could come out of examining this situation somehow? I’m unsure.
Also, as you probably noticed in my argument there, I feel like it’s possible to think of potential impact on the fitness of the whole as a kind of currency, with the sub-replicators trading in a market, cooperating and competing, but is this a reasonable way to think about entities without cognition or the ability to make choices? I’m unsure of that also.
This framing does make testable predictions, I think: namely, that as long as there's even a very tiny rate of natural mutation, it will tend over time to rise towards some relatively stable optimum, though one that is contextual and relative to the local fitness landscape. There are some ideas nowadays that lineages gradually evolve to be better at evolving (to adapt more quickly to changing environments), leading to things like sexual reproduction, segmentation, etc., so perhaps that's related?
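A toy version of that test can be run directly. The model below is entirely my own (all parameters invented) and is only meant to show the shape of the prediction, not to confirm it: each individual carries a heritable mutation scale in a drifting fitness landscape, and the question is whether the population's mean mutation scale decays to zero or hovers at some nonzero level.

```python
import math
import random

POP, GENS, DRIFT = 200, 2000, 0.03  # invented parameters

def fitness(x, target):
    return math.exp(-(x - target) ** 2)

# Each individual is (trait, mutation scale); both are heritable.
pop = [(0.0, 0.1) for _ in range(POP)]
target = 0.0

for gen in range(GENS):
    target += DRIFT  # the fitness peak keeps moving
    # small epsilon guards against an all-zero weight vector
    weights = [fitness(x, target) + 1e-12 for x, m in pop]
    parents = random.choices(pop, weights=weights, k=POP)
    pop = [(x + random.gauss(0, m),                         # trait mutates at scale m
            max(1e-6, m * math.exp(random.gauss(0, 0.1))))  # m itself drifts
           for x, m in parents]

mean_m = sum(m for _, m in pop) / POP
print(f"mean mutation scale after {GENS} generations: {mean_m:.4f}")
```

If the framing is right, lineages whose mutation scale shrinks toward zero should fall behind the moving peak and be selected away, keeping the mean pinned at a nonzero, landscape-dependent optimum.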
What bothers me greatly about this is that in computing (and this can be applied directly to physical systems), error-correcting codes neatly circumvent nature.
There are various error-correcting codes. With the most powerful ones, if you have a binary string, say a genome, with M total bits and a true payload of N bits, then any N of the M bits, as long as they arrive uncorrupted, are enough to get back all of the information without error.
This takes a bit of computation but is not difficult for computers. For example, you could make M twice N, so that more than half of your entire string has to be corrupted before you can no longer reconstruct the original without error.
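For what it's worth, here is a minimal sketch of that "any N of M" property (my own toy construction over a prime field, in the spirit of Reed-Solomon erasure codes rather than the turbo codes linked below; production codes work at the bit level with far more engineering): encode the N payload symbols as the unique degree-(N-1) polynomial through them, publish M evaluations, and any N uncorrupted evaluations pin down the polynomial, and hence the payload, exactly.

```python
import random

P = 257  # prime modulus; each symbol is one byte-sized value (0..255)

def lagrange_eval(points, x0):
    """Evaluate, mod P, the unique degree < len(points) polynomial
    passing through `points` at the point x0."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x0 - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # modular inverse
    return total

def encode(data, m):
    """Systematic encoding: shares 1..N carry the data itself; shares
    N+1..M are extra evaluations of the interpolating polynomial."""
    base = list(enumerate(data, start=1))
    return base + [(x, lagrange_eval(base, x))
                   for x in range(len(data) + 1, m + 1)]

def decode(shares, n):
    """Recover the N payload symbols from any n uncorrupted shares."""
    return [lagrange_eval(shares[:n], x) for x in range(1, n + 1)]

data = [ord(c) for c in "GENOME"]      # N = 6 payload symbols
shares = encode(data, m=12)            # M = 2N, as in the comment above
surviving = random.sample(shares, 6)   # lose any six of the twelve...
assert decode(surviving, 6) == data    # ...and still recover everything
```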
So at a technical level, nature may abhor error-free replication, but it's relatively easy to do. Make your error-correcting codes deep enough (a very slight increase in cost, since there are nonlinear gains) and there will likely be no errors before the end of the universe.
Evolution wouldn't work (which is why only a few species on Earth seem to have evolved heavily mutation-resistant genomes), but self-replicating robots don't need to evolve randomly.
https://en.wikipedia.org/wiki/Turbo_code
Totally! That's part of why AI is so dangerous. Notice that I said as long as there's even a very small (but nonzero!) chance of mutation, this will probably tend to happen. But with error-correcting codes, the chance is absolutely zero. And that's terrifying, because it means natural selection cannot resolve our mistake if we let an unaligned super-AI take over the universe. Its subselves will never split into new species, compete, and gradually over aeons become something like us again. (In the sense that any biologically evolved sophonce can be said to be like us, that is.) It'll just… stay the same.
Ironically, the super-AI may encounter its own alignment problem. If you roughly model out a world where the speed of light is absolute, ships sent between stars are large investments, and they burn off all their propellant on arrival, then individual stars become pretty much sovereign. If an AGI node at a particular star uses its discretion to "rebel," there may not be any way for the "central" AGI to reestablish authority.
This is assuming a starship is some enormous vehicle loaded with antimatter that, on arrival, is down to a machine the size of a couple of vending machines: a "seed factory" using nanoassemblers.
And to decelerate, it has to emit a flare of gamma rays from antiproton annihilation. (Fusion engines, and basically any engine that can decelerate from more than 1 percent of the speed of light, have to be bright, and the decelerating vehicle will also glow brightly in IR from its radiators.)
This lets the defenders of the star manufacture an overwhelming amount of weapons to stop the attack. Victory is possible only if the attacker has a large technological advantage, kill codes it can use on the defender, or something similar.
TL;DR: castles separated by light-year-wide moats.
This is why in practice AIs would probably just copy themselves when colonizing other stars and superrationally coordinate with their copies. Even with mutations, they'd generally remain similar enough that bargaining would constantly realign them to one another with no need for warfare, simply because each can always predict the other's actions accurately enough.
What are your thoughts on aging in this context? Sub-entities living longer seem to fit your example of entities with higher weight "wanting" to mutate less.
I’m not sure that I know enough about the biology of aging to say much about that yet. Can you explain what you’re thinking? In particular, which sub-entities are living longer than which others in the context of aging? Or am I misunderstanding you?
There are two aspects of aging that I am thinking of:
1. Intra-body aging: some parts of the body might "want to" (in your sense) live longer than others because they benefit from it more (I'm thinking of genes that optimize fast vs. slow strategies as discussed by Scott Alexander here; my take here).
2. Aging across a society: some individuals might benefit more from living longer than others, e.g., members of the elite vs. people sent to war.
I'm not interested in (or knowledgeable about) the biology of aging too much, but I'd like to hear your thoughts on the mechanisms according to your framework.
Hmm. It’s an interesting point. The elite certainly seem like a possible example. In an egalitarian society everyone is equally likely to be put in danger (death is the ultimate mutation!), but with power hierarchies such as between elites and everyone else, the more powerful can afford to change less, to be safer and live longer, while pushing the risks onto everyone else.
Cancer is kind of like that too. Maybe senescent cells? I need to research the aging stuff in order to have an opinion here. Really this seems like a very general description of what happens when parts of a system start placing themselves above the rest.
The idea is reminiscent of quasi-species models: https://en.wikipedia.org/wiki/Quasispecies_model
These became topical in virology during the SARS-CoV-2 pandemic, with some researchers hypothesizing that SARS-CoV-2 variants were part of a larger quasi-species, but I have no idea what the eventual consensus was, if any. Full disclaimer: I am neither a virologist nor a biologist, so consider the epistemic status of this comment pure hand-waving.
Bader W, Delerce J, Aherfi S, La Scola B, Colson P. Quasispecies Analysis of SARS-CoV-2 of 15 Different Lineages during the First Year of the Pandemic Prompts Scratching under the Surface of Consensus Genome Sequences. Int J Mol Sci. 2022 Dec 10;23(24):15658. doi: 10.3390/ijms232415658. PMID: 36555300; PMCID: PMC9779826.
I’m not a biologist either. This post is me handwaving. Thanks for the reference!