Where was the optimization pressure for better designs supposed to have arisen in the “communal” phase?
Thus, we may speculate that the emergence of life should best be viewed in three phases, distinguished by the nature of their evolutionary dynamics. In the first phase, treated in the present article, life was very robust to ambiguity, but there was no fully unified innovation-sharing protocol. The ambiguity in this stage led inexorably to a dynamic from which a universal and optimized innovation-sharing protocol emerged, through a cooperative mechanism. In the second phase, the community rapidly developed complexity through the frictionless exchange of novelty enabled by the genetic code, a dynamic we recognize to be patently Lamarckian (19). With the increasing level of complexity there arose necessarily a lower tolerance of ambiguity, leading finally to a transition to a state wherein communal dynamics had to be suppressed and refinement superseded innovation. This Darwinian transition led to the third phase, which was dominated by vertical descent and characterized by the slow and tempered accumulation of complexity.
They claim that universal horizontal gene transfer (HGT) arose through a “cooperative” mechanism, without saying what that would have looked like at the level of cells, or at the level of some kind of soupy, boundary-free chemostat, or whatever the relevant unit was supposed to be. They also don’t seem to be aware of compensatory mutations, or quasi-species, or the fact that horizontal transfer is parasitic by default.
Like: where did the new “sloppy but still useful” alleles come from? Why (and how) would any local part of “the communal system” spend energy to generate such things or pass them along starting literally from scratch? This sort of meta-evolutionary cleverness usually requires a long time to arise!
The thing that should be possible (not easy, but possible) only now, with technology, is to invent some new amino acids (leveraging what exists in the biosphere now instead of what was randomly available billions of years ago) AND a new codon system for them, and to bootstrap from there, via directed evolution, towards some kind of cohesively viable neolife that (if it turns out to locate a better local optimum than we did) might voraciously consume the current ecology.
The above image is from Figure 2 (“Examples of genetically encoded noncanonical amino acids with novel functions”) of Schultz’s “Expanding the genetic code”.
Compensatory mutation is actually a pretty interesting and key concept, because it suggests a method by which one might prevent a gray goo scenario on purpose, rather than via “mere finger crossing” over limitations that we can only pray it will be lucky enough to run into by accident.
We could run evolution in silico at the protein level, with large conceptual jumps, and then print something unlikely to be able to evolve.
The “unable to easily evolve” thing might work something like human telomeres (a built-in counter that limits how many divisions a lineage gets), but more robust.
It could make every generation of neolife almost entirely a degeneration away from the “edenic” printed neolife and towards a mutational meltdown.
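To make that one-way degeneration concrete, here is a minimal toy sketch in Python (a cartoon, not a biophysics model; GENOME_SIZE, DELETERIOUS_COST, BENEFICIAL_FRACTION, and the rest are made-up knobs) of a population that starts at a designed fitness peak, where almost every mutation is harmful and compensatory mutations have been engineered to be vanishingly rare:

```python
# Toy sketch only: a "printed" genome starts at a designed fitness peak.
# Nearly every mutation breaks a site; compensatory repairs are made
# astronomically rare via BENEFICIAL_FRACTION. Mean fitness ratchets down.
import random

GENOME_SIZE = 200           # hypothetical number of engineered sites
MUTATION_RATE = 5e-3        # per-site, per-generation (made-up number)
DELETERIOUS_COST = 0.02     # fitness lost per broken site (made-up)
BENEFICIAL_FRACTION = 1e-9  # engineered to be vanishingly small
POP_SIZE = 200
GENERATIONS = 150

def fitness(broken_sites: int) -> float:
    return max(0.0, 1.0 - DELETERIOUS_COST * broken_sites)

def mutate(broken_sites: int) -> int:
    hits = sum(random.random() < MUTATION_RATE for _ in range(GENOME_SIZE))
    for _ in range(hits):
        if random.random() < BENEFICIAL_FRACTION and broken_sites > 0:
            broken_sites -= 1   # rare compensatory repair
        else:
            broken_sites += 1   # the overwhelmingly common case
    return broken_sites

population = [0] * POP_SIZE     # everyone starts as the printed design
for gen in range(GENERATIONS):
    weights = [fitness(b) for b in population]
    if sum(weights) == 0:
        print(f"mutational meltdown by generation {gen}")
        break
    # selection: parents sampled proportional to fitness, offspring mutated
    parents = random.choices(population, weights=weights, k=POP_SIZE)
    population = [mutate(b) for b in parents]
    if gen % 20 == 0:
        mean_fit = sum(fitness(b) for b in population) / POP_SIZE
        print(f"generation {gen:3d}: mean fitness {mean_fit:.3f}")
```

With BENEFICIAL_FRACTION effectively zero, selection can slow the slide but can never climb back up, which is exactly the property one would be trying to engineer in on purpose.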
Note that this is essentially an “alignment” or “corrigibility” strategy, but at the level of chemistry and molecular biology, where the hardware is much, much easier to reason about than the “software” of “planning and optimization processes themselves”.
If you could cause there to be, on purpose, only a 1-in-a-septillion chance of positive or compensatory mutations (knowing the mechanisms and the math well enough to calculate this risk), and put several fully independent booby traps into the system that will trigger and shut it down after a handful of mutations, then you could have the first X generations “eat and double very very efficiently”, then have the colony switch to “doing the task” for Y generations, and then, as meltdown became inevitable within ~Z more generations, perhaps they could actively prepare for recycling?
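Here is a back-of-envelope version of that arithmetic (every number is a hypothetical placeholder, and the model is deliberately crude):

```python
# Back-of-envelope sketch. X, Y, Z, the per-division odds of a useful
# mutation, and the trap count are all hypothetical knobs, not claims about
# real biology. The bookkeeping is just opportunities * per-opportunity risk.
P_GOOD_MUTATION = 1e-24       # "1 in a septillion" per cell division (aspirational)
INDEPENDENT_TRAPS = 3         # escape modeled as defeating every trap at once
X_GROWTH_GENERATIONS = 40     # "eat and double very very efficiently"
Y_TASK_GENERATIONS = 20       # "doing the task"
Z_MELTDOWN_GENERATIONS = 10   # winding down toward recycling

total_generations = X_GROWTH_GENERATIONS + Y_TASK_GENERATIONS + Z_MELTDOWN_GENERATIONS
# A colony that doubles every generation for G generations performs roughly
# 2**(G+1) cell divisions in total; that is the count of "opportunities".
total_divisions = 2 ** (total_generations + 1)

# Crude simplification: escape requires an independent rare event per trap
# (a real analysis would track how mutations accumulate along lineages).
p_escape_per_division = P_GOOD_MUTATION ** INDEPENDENT_TRAPS
expected_escapes = total_divisions * p_escape_per_division

print(f"total divisions        ~ {total_divisions:.2e}")
print(f"escape prob / division ~ {p_escape_per_division:.1e}")
print(f"expected escape events ~ {expected_escapes:.1e}")
```

The hard engineering question is whether the mechanisms could actually be characterized well enough to justify numbers like these; the bookkeeping itself is trivial.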
I can at least IMAGINE this for genomes, because genomes are mostly not Turing Complete.
I know of nothing similar that could be used to make AI with survive-and-spread powers similarly intrinsically safe.
You’re misunderstanding the point of those proposed amino acids. They’re proposals for things to be made by (at least partly) non-enzymatic lab-style chemical processes, processed into proteins by ribosomes, and then used for non-cell purposes. Trying to use azides (!) or photocrosslinkers (?) in amino acids isn’t going to make cells work better.
There really isn’t much improvement to be had by using different amino acids.
The new amino acids might be “essential” (not manufacturable internally) and might have to come in as “vitamins”. This is another possible way to prevent gray goo on purpose, though hypothetically it might be possible to find ways to move that synthesis into the genome of the neolife itself, if that were cheap and safe. These seem like engineering considerations that could change from project to project.
Mostly I have two fundamental points:
1) Existing life is not necessarily biochemically optimal, because it currently exists within circumscribed bounds that can be transgressed. Those amino acids are weird and cool and might be helpful for something. Only one new amino acid (and not necessarily any of those… just anything) has to work in order to give neolife some kind of durable competitive advantage over normal life.
2) All designs have to come from somewhere, with the optimization pressure supplied by some source, and it is not safe or wise to rely on random “naturally given” limits on the powers of systems that contain an internal open-ended optimization engine. When trying to do safety engineering, and trying to reconcile inherent safety with the design of something capable of autonomous (potentially exponential) growth, either (1) just don’t do it, or else (2) add multiple well-tested, purposeful, independent, default shutdown mechanisms. If you are “doing it”, then put all of your safety mechanisms into a fault tree analysis, and if the chance of any given error is 1/N, make sure there will definitely not be anything vaguely close to N opportunities for a catastrophe to occur.
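To put a toy number on that last rule of thumb (placeholder values, not a real fault tree analysis):

```python
# Minimal illustration with placeholder numbers: if each opportunity for
# catastrophe has probability 1/N, then over k independent opportunities
#     P(at least one catastrophe) = 1 - (1 - 1/N)**k, roughly k/N when k << N,
# so the design margin is basically the ratio k/N.
def p_catastrophe(n: float, opportunities: int) -> float:
    """Exact probability of at least one failure over `opportunities` trials."""
    return 1.0 - (1.0 - 1.0 / n) ** opportunities

N = 1e12  # hypothetical: each opportunity carries a 1-in-a-trillion failure chance
for k in (10**3, 10**6, 10**9, 10**12):
    print(f"k = {k:.0e}:  exact = {p_catastrophe(N, k):.3e},  k/N = {k / N:.3e}")
```

The exact value and the k/N approximation agree while k is far below N and diverge badly as k approaches N, which is the whole reason to keep the number of opportunities nowhere near N.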
That third link seems to be full of woo.