self-replicating microscopic machines with some fundamental mechanistic differences from all biological life that make them superior
I think Wet Nanotech might qualify then.
Consider a minor modification to a natural microbe: a different genetic code. That is, each codon still codes for an amino acid, but which codon corresponds to which amino acid could differ. (This correspondence is universal in natural life, with a few small exceptions.) Such an organism would effectively be immune to all of the viruses that would affect its natural counterpart, and no horizontal gene transfer to natural life would be possible.
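For intuition, here is a minimal sketch (a made-up four-codon toy table rather than the real 64-codon code) of why a reassigned code confers virus immunity: a viral gene written against the standard mapping decodes into a different, presumably nonfunctional, protein in the reassigned host.

```python
# Toy illustration only: a tiny, hypothetical subset of the codon table,
# not the real genetic code. The point is that the same nucleotide
# sequence decodes to different proteins under the two mappings.
STANDARD = {"UUU": "Phe", "CUU": "Leu", "AUU": "Ile", "GUU": "Val"}
REASSIGNED = {"UUU": "Val", "CUU": "Phe", "AUU": "Leu", "GUU": "Ile"}  # permuted meanings

def translate(mrna: str, code: dict) -> list:
    """Translate an mRNA string codon by codon using the given code."""
    return [code[mrna[i:i + 3]] for i in range(0, len(mrna) - 2, 3)]

viral_gene = "UUUCUUGUUAUU"  # a gene "designed" against the standard code
print(translate(viral_gene, STANDARD))    # ['Phe', 'Leu', 'Val', 'Ile'] -- the intended protein
print(translate(viral_gene, REASSIGNED))  # ['Val', 'Phe', 'Ile', 'Leu'] -- scrambled in the new host
```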
One could also imagine further modifications. Greater resistance to mutations, perhaps using a more stable XNA and more repair genes. More types of amino acids. Reversed chirality of various biomolecules as compared to natural life, etc. Such an organism (with the appropriate enzymes) could digest natural life, but not the reverse.
There’s nothing here that seems fundamentally incompatible with our understanding of biochemistry, but with enough of these changes, such an organism might then become an invasive species with a massive competitive advantage over natural life, ultimately resulting in an ecophagy scenario.
That has already happened naturally and also already been done artificially.
See this paper for reasons why codons are almost universal.
That third link seems to be full of woo.
Where was the optimization pressure for better designs supposed to have arisen in the “communal” phase?
Thus, we may speculate that the emergence of life should best be viewed in three phases, distinguished by the nature of their evolutionary dynamics. In the first phase, treated in the present article, life was very robust to ambiguity, but there was no fully unified innovation-sharing protocol. The ambiguity in this stage led inexorably to a dynamic from which a universal and optimized innovation-sharing protocol emerged, through a cooperative mechanism. In the second phase, the community rapidly developed complexity through the frictionless exchange of novelty enabled by the genetic code, a dynamic we recognize to be patently Lamarckian (19). With the increasing level of complexity there arose necessarily a lower tolerance of ambiguity, leading finally to a transition to a state wherein communal dynamics had to be suppressed and refinement superseded innovation. This Darwinian transition led to the third phase, which was dominated by vertical descent and characterized by the slow and tempered accumulation of complexity.
They claim that universal horizontal gene transfer (HGT) arose through a “cooperative” mechanism, without saying what that would have looked like at the level of cells, or at the level of some kind of soupy boundary-free chemostat, or something?
They don’t seem to be aware of compensatory mutations or quasi-species or that horizontal transfer is parasitic by default.
Like: where did the new “sloppy but still useful” alleles come from? Why (and how) would any local part of “the communal system” spend energy to generate such things or pass them along starting literally from scratch? This sort of meta-evolutionary cleverness usually requires a long time to arise!
The thing that should be possible (not easy, but possible) only now, with technology, is to invent some new amino acids (leveraging what exists in the biosphere now instead of what was randomly available billions of years ago) AND a new codon system for them, and to bootstrap from there, via directed evolution, towards some kind of cohesively viable neolife that (if it turns out to locate a better local optimum than we did) might voraciously consume the current ecology.
The above image is from Figure 2 (“Examples of genetically encoded noncanonical amino acids with novel functions”) of Schultz’s “Expanding the genetic code”.
Compensatory mutation is actually a pretty interesting and key concept, because it suggests a method by which one might prevent a gray goo scenario on purpose, rather than by merely crossing our fingers and praying that we get lucky enough for it to run into such limitations by accident.
We could run evolution in silico at the protein level, with large conceptual jumps, and then print something unlikely to be able to evolve.
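As a cartoon of what “evolution in silico at the protein level, with large conceptual jumps” might look like, here is a minimal hill-climbing sketch; the fitness() stand-in, the target string, and the jump schedule are all invented for illustration and are not any real protein model.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def fitness(seq: str) -> float:
    # Placeholder for a real structure/function predictor; here it just
    # counts matches to an arbitrary 10-residue target.
    return sum(1.0 for a, b in zip(seq, "MKTAYIAKQR") if a == b)

def mutate(seq: str, jump: bool) -> str:
    # "Large conceptual jump" = change several residues at once instead of one.
    sites = 4 if jump else 1
    s = list(seq)
    for _ in range(sites):
        s[random.randrange(len(s))] = random.choice(AMINO_ACIDS)
    return "".join(s)

best = "AAAAAAAAAA"
for step in range(5000):
    candidate = mutate(best, jump=(step % 50 == 0))
    if fitness(candidate) >= fitness(best):
        best = candidate
print(best, fitness(best))  # something close to the target, found without wet-lab rounds
```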
The “unable to easily evolve” thing might be similar to human telomeres, but more robust.
It could make every generation of neolife almost entirely a degeneration away from “edenic neolife” and towards a mutational meltdown.
Note that this is essentially an “alignment” or “corrigibility” strategy, but at the level of chemistry and molecular biology, where the hardware is much, much easier to reason about than the “software” of “planning and optimization processes themselves”.
If you could cause there to be only a 1-in-a-septillion chance of positive or compensatory mutations on purpose (knowing the mechanisms and the math needed to calculate this risk), and put several fully independent booby traps into the system that will fail after a handful of mutations, then you could have the first X generations “eat and double very, very efficiently”, then have the colony switch to “doing the task” for Y generations, and then, as meltdown became inevitable within ~Z more generations, they could perhaps actively prepare for recycling?
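To put rough numbers on that picture (all of them assumptions, just to show the shape of the calculation): if every cell division is one opportunity for an engineered-against escape event, the expected number of escapes is roughly the total number of replications times the per-replication escape probability.

```python
# Back-of-envelope sketch; every number here is an assumption.
p_escape = 1e-24                       # engineered chance of a positive/compensatory escape per replication
doublings = 60                         # the "eat and double" phase (X generations, assumed)
total_replications = 2**doublings - 1  # each division is one opportunity (~1.15e18)

expected_escapes = total_replications * p_escape
print(f"replications ~ {total_replications:.2e}")          # ~1.15e+18
print(f"expected escape events ~ {expected_escapes:.2e}")  # ~1.15e-06, comfortably below 1
```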
I can at least IMAGINE this for genomes, because genomes are mostly not Turing Complete.
I know of nothing similar that could be used to make AI with survive-and-spread powers similarly intrinsically safe.
You’re misunderstanding the point of those proposed amino acids. They’re proposals for things to be made by (at least partly) non-enzymatic lab-style chemical processes, processed into proteins by ribosomes, and then used for non-cell purposes. Trying to use azides (!) or photocrosslinkers (?) in amino acids isn’t going to make cells work better.
There really isn’t much improvement to be had by using different amino acids.
The new amino acids might be “essential” (not manufacturable internally) and might have to come in as “vitamins”. This is another possible way to prevent gray goo on purpose, though hypothetically it might be possible to find ways to move that synthesis into the genome of neolife itself, if that were cheap and safe. These seem like engineering considerations that could change from project to project.
Mostly I have two fundamental points:
1) Existing life is not necessarily biochemically optimal, because it currently exists within circumscribed bounds that can be transgressed. Those amino acids are weird and cool and might be helpful for something. Only one amino acid (and not even necessarily any of those… just anything) has to work to give “neolife” some kind of durable competitive advantage over normal life.
2) All designs have to come from somewhere, with the optimization pressure supplied by some source, and it is not safe or wise to rely on random “naturally given” limits in the powers of systems that contain an internal open-ended optimization engine. When trying to do safety engineering, and trying to reconcile inherent safety with the design of something involving autonomous (potentially exponential) growth, either (1) just don’t do it, or else (2) add multiple well-tested, purposeful, independent default shutdown mechanisms. If you’re “doing it”, then look at all your safety mechanisms in a fault tree analysis, and if the chance of an error is 1/N, make sure there will definitely not be anything vaguely close to N opportunities for a catastrophe to occur.
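For point (2), the fault-tree arithmetic might look like the following sketch (the failure rates and the number of opportunities are made-up placeholders): with k independent shutdown mechanisms that each fail with probability p, a catastrophe requires all of them to fail at one of M opportunities, so the expected count is roughly M·p^k, and M must be kept far below 1/p^k.

```python
# Illustrative fault-tree arithmetic; all numbers are placeholders.
p_each = 1e-6                      # failure probability of one well-tested shutdown mechanism
k = 3                              # number of fully independent mechanisms
p_all_fail = p_each ** k           # 1e-18 if they are truly independent

opportunities = 1e12               # keep this far below 1 / p_all_fail
expected_catastrophes = opportunities * p_all_fail
print(f"P(all mechanisms fail at once) = {p_all_fail:.0e}")                            # 1e-18
print(f"expected catastrophes over all opportunities = {expected_catastrophes:.0e}")  # 1e-06
```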
I am also in favor of Wet Nanotech. But a different genetic code is not needed, or at least it is not the important thing.
The main thing is to put a Turing-complete computer inside a living cell similar to E. coli, and to create a way for two-way communication with an external computer. Such a computer should be genetically encoded, so that if the cell replicates, the computer also replicates. The computer has to be able to get data from sensors inside the cell and to output proteins.
Building such Wet Nanotech is orders of magnitude simpler than building real nanotech.
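Schematically, the architecture described above might look something like this sketch; everything here (the class name, the sensor names, the particular outputs) is hypothetical, just to make the proposal concrete.

```python
from dataclasses import dataclass, field

@dataclass
class CellComputer:
    """Hypothetical genetically encoded controller living inside a cell."""
    state: dict = field(default_factory=lambda: {"stress_counter": 0})

    def step(self, sensors: dict) -> dict:
        """One control cycle: read intracellular sensors, decide which proteins to express."""
        outputs = {}
        if sensors.get("oxidative_stress", 0.0) > 0.5:
            self.state["stress_counter"] += 1
            outputs["catalase"] = "EXPRESS"
        if self.state["stress_counter"] > 3:
            outputs["reporter_protein"] = "EXPRESS"  # a signal an external computer could read
        return outputs

    def replicate(self) -> "CellComputer":
        """Because the program is genetically encoded, daughter cells inherit it."""
        return CellComputer(state=dict(self.state))

cell = CellComputer()
print(cell.step({"oxidative_stress": 0.9}))  # {'catalase': 'EXPRESS'}
daughter = cell.replicate()                  # replication copies the computer along with the cell
```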
The main obstacle for an AI is the need to perform real-world experiments. In the classical EY paradigm, the first AI is so superintelligent that it does not need to perform any experiments: it can guess everything about the real world and will get everything right on the first attempt. But if the first AI is still limited by the amount of available compute, by its intelligence, or by some critical data, it has to run tests.
Running experiments takes longer, and the AI is more likely to be caught during its first attempts. This will slow its ascent and may force it to choose paths where it cooperates with humans for longer periods of time.
You want to grow brains that work more like CPUs. The computational paradigm used by CPUs is used because it’s conceptually easy to program, but it has some problems. Error tolerance is very poor; CPUs can be crashed by a single bit-flip from cosmic rays. CPUs also have less computational capacity than GPUs. Brains work more like... neural networks; perhaps that’s where the name came from.
No, I didn’t mean brains. I mean digital computers inside the cell; but they can use all the usual forms of error correction, including parallelism.
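For concreteness, the kind of error tolerance being pointed at in that reply could be as simple as triple modular redundancy: run the computation in several copies and take a majority vote, so that a single corrupted copy (from a mutation or a stray bit-flip) is masked. A toy sketch:

```python
from collections import Counter

def majority_vote(results: list):
    """Return the value agreed on by a strict majority of redundant copies."""
    value, count = Counter(results).most_common(1)[0]
    if count < len(results) // 2 + 1:
        raise RuntimeError("no majority; too many copies failed")
    return value

# Three redundant copies of the same computation; one copy got corrupted.
print(majority_vote([42, 42, 7]))  # 42 -- the single bad copy is outvoted
```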
Have you heard of the Arc protein? It’s conceivable that it’s responsible for transmitting digital information in the brain; like, if that were useful, it would be doing that, so I’d expect to see computation too.
I sometimes wonder if this is OpenWorm’s missing piece. But it’s not my field.
That is so freakin’ cool. Thank you for this link. Hadn’t heard about this yet…
...and yes, memory consolidation is on my list as “very important” for uploading people, to get a result where the ems are still definitely “full people” (with all the features which give presumptive confidence that the features are “sufficient for personhood”, because the list has been constructed in a minimalist way, such that if the absence of such a feature “broke personhood”, then not even normal healthy humans would be “people”).
There is a new paper by Jeremy England that seems relevant:
Self-organized computation in the far-from-equilibrium cell
https://aip.scitation.org/doi/full/10.1063/5.0103151
Recent progress in our understanding of the physics of self-organization in active matter has pointed to the possibility of spontaneous collective behaviors that effectively compute things about the patterns in the surrounding patterned environment.