I can’t help but wonder just how huge that space of living states is, and how many of them correspond to normal cell types or cell states in such a mammoth, and how intractable it would be to find that one set of states that corresponds to ‘mammoth oocyte’ and produces a self-perpetuating multicellular system.
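To make ‘huge’ slightly more concrete, here is a deliberately naive back-of-envelope sketch; the gene count and the on/off simplification are assumptions purely for illustration, not real biology:

```python
from math import log10

# Deliberately naive: treat each gene as independently "on" or "off",
# ignoring graded expression, constraints between genes, and non-genetic state.
n_genes = 20_000              # rough figure for protein-coding genes (assumed)

naive_states = 2 ** n_genes   # raw on/off combinations
print(f"naive state space: ~10^{int(n_genes * log10(2))}")  # ~10^6020

# versus the number of recognized cell types/states, generously a few thousand
print("observed cell types/states: on the order of 10^2 to 10^4")
```

Even if virtually none of those combinations are viable, the gulf between that raw space and the handful of real, self-sustaining cell states is what makes the search feel so intractable.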
I wouldn’t overestimate its additional complexity, especially given that most of it ultimately derives from the relationships between different areas of the DNA sequence itself. For the predictability of results when states are varied slightly, compare e.g. the success and predictability of results (on the viral level, not the clinical-results level) of manipulating lentiviruses and AAVs; see for example this NEJM paper.
No physics-level simulation is needed to accurately predict what a cell will do when you switch out parts of its genome.
If it were otherwise (if you think about it), the whole natural virus ecosystem itself would break down.
EDIT: A different example that comes to mind: insulin, which is synthesized in a laboratory strain of Escherichia coli that has been genetically altered with recombinant DNA to produce biosynthetic human insulin. No surprises there either.
My main response there is that in those situations, you are making one small change to a pre-existing system using elements that have previously been qualitatively characterized. In the case of the viral gene therapy, you are adding to the cell a DNA construct consisting of a crippled virus that can’t actually replicate in normal cells (used purely for insertion), a promoter element that turns on an adjacent reading frame in any human cellular context, and the reading frame for the gene in question with all the frills and splice sites removed. In the case of insulin in bacteria, you are adding to the bacteria the human insulin reading frame and a few processing-enzyme reading frames, each attached to a constantly-on bacterial promoter element. The overall systems of the cells are left intact, and you are just piggybacking on them.
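If it helps, here is a toy sketch of what I mean by piggybacking; every name in it is invented for illustration, and it is not meant to describe any real expression system:

```python
# Toy model only: the host's machinery is reduced to one rule it already has
# ("express any reading frame sitting downstream of a promoter I recognize").
# The engineered cassette just supplies parts that rule knows how to handle;
# nothing about the host itself is changed. All names are made up.

def host_expresses(construct, recognized_promoters):
    """Return the reading frames the (unmodified) host machinery would express."""
    expressed = []
    for i in range(len(construct) - 1):
        if construct[i] in recognized_promoters and construct[i + 1].endswith("_orf"):
            expressed.append(construct[i + 1])
    return expressed

ecoli_promoters = {"constitutive_bacterial_promoter"}   # what the host already reads

insulin_cassette = [
    "constitutive_bacterial_promoter", "human_proinsulin_orf",
    "constitutive_bacterial_promoter", "processing_enzyme_orf",
]

print(host_expresses(insulin_cassette, ecoli_promoters))
# ['human_proinsulin_orf', 'processing_enzyme_orf']
```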
You can do things like this because in many living systems you have elements that have been isolated and tested, and of which you can say “if I stick this in, it will do X”. That has for the most part been figured out empirically over a long time, by putting into cells elements that are whole, truncated, or mutated in some way and seeing which ones work and which ones don’t. These days, by examining their chemical structures, we have physical and chemical explanations for a bunch of them and how they work, and we are starting to get better at predicting them in particular organismal contexts, though it’s still much, much harder in multicellular creatures with huge genomes than in those with compact genomes*.
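A toy version of that empirical logic might look like this; the elements, variants, and readouts are all invented, and only the inference pattern matters:

```python
# Invented data: each variant of a regulatory region is recorded as
# (set of elements still present, whether the reporter still expressed).
variants = {
    "full_length":  ({"A", "B", "C", "D"}, True),
    "delete_A":     ({"B", "C", "D"},      True),
    "delete_B":     ({"A", "C", "D"},      False),
    "delete_C":     ({"A", "B", "D"},      True),
    "delete_D":     ({"A", "B", "C"},      False),
    "minimal_B_D":  ({"B", "D"},           True),
}

all_elements = {"A", "B", "C", "D"}

# Call an element "required" if every variant lacking it failed to express.
required = set()
for element in all_elements:
    outcomes_without_it = [works for present, works in variants.values()
                           if element not in present]
    if outcomes_without_it and not any(outcomes_without_it):
        required.add(element)

print(sorted(required))   # ['B', 'D'] in this made-up dataset
```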
When I was saying that physics-like things were needed, I was referring more to a situation in which you do not have a pre-existing living thing and are trying to work out what an element does from its sequence alone. When you can test things in the correct context and start figuring out which proteins and DNA elements are important for what, you can leap over this and tell what is important for what even before you really understand the physical reasons. If you were starting from just the DNA sequence and didn’t really understand what the non-DNA context for it was, or possibly even how the DNA helps produce that non-DNA context, you get a much less tractable problem.
*(It’s worth noting that the ease of analysis of noncoding elements is wildly different in different organisms. Bacteria and yeast have compact promoter elements with DNA sequences of dozens to hundreds of base pairs each, often with easily identifiable protein binding sites, while in animals a promoter element can be in chunks strewn across hundreds of kilobases (though several kilobases is more typical) and is usually defined as ‘this is the smallest piece of DNA we could include and still get it to express properly’, with only a subset of computationally predicted protein binding sites actually turning out to be functionally important. A yeast centromere element, for fiber attachment to chromosomes during cell division, is a precisely defined 125-base-pair sequence that assembles a complex of anchoring proteins on itself, while a human centromere can be the size of an entire yeast genome and is a huge array of short repeats that might just bind the fiber-anchoring proteins a little bit better than random DNA. Noncoding elements get larger and less straightforward much faster than coding elements as genome size increases.)
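As a rough order-of-magnitude illustration of why predictions from sequence alone tend to over-call binding sites (the motif length and genome sizes below are round assumed numbers):

```python
motif_length = 8                        # a typical-ish transcription factor site (assumed)
p_chance_match = 0.25 ** motif_length   # exact match to random DNA at one position

human_genome_bp = 3_000_000_000         # ~3 Gb
yeast_genome_bp = 12_000_000            # ~12 Mb

# Expected matches by chance alone, before allowing any degeneracy in the motif
# (real motifs are degenerate and match far more often than this).
print(f"human genome: ~{p_chance_match * human_genome_bp:,.0f} chance matches")
print(f"yeast genome: ~{p_chance_match * yeast_genome_bp:,.0f} chance matches")
```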
EDIT: As for viral ecosystems, viruses can hop from species to species because related species share a lot of cellular machinery, even when the splits between them happened hundreds of millions of years ago, and the virus just has to work well enough (and will immediately start adapting to its new host). Seeing as life is more than three gigayears old, though, there are indeed barriers that viruses cannot cross. You will not find a virus that can infect both a bacterium and a mammal, or a mammal and a plant. When they hop from species to species or population to population, the differences can render some species resistant or change the end behavior of the virus, and you get things like simian immunodeficiency virus hardly affecting chimps while HIV, separated from it by only a century, kills its human host.
If you were starting from just the DNA sequence and didn’t really understand what the non-DNA context for it was, or possibly even how the DNA helps produce that non-DNA context, you get a much less tractable problem.
I can’t help but feel this is related to (what I perceive as) a vast overrating of the plausibility of uploading from cryonically preserved brain remnants. It’s late at night and I’m still woozy from finals, but it feels like someone who has discovered they enjoy, say, classical music, without much grasp of music theory or even knowing how to play any instrument, figuring it can’t be too hard to just brute-force a piano riff of, say, the fourth movement of Beethoven’s 9th if they just figure out by listening which notes to play. The mistake being made is subtler, and yet more important, than simply underestimating the algorithmic complexity of the desired output.