Replication involves copying and thus erasing bits from the environment, not from storage.
The optimal non-redundant storage nanobot already exists—a virus. But it’s hardly interesting, and regardless, the claim I originally made is about Pareto optimality.
Popping out to a meta-level, I am not sure if your aim in these comments is to communicate an idea clearly and defend your claims in a way that’s legible and persuasive to other people?
For me personally, if that is your aim, there are two or three things that would be helpful.
Use widely accepted jargon in ways that clearly (from other people’s perspective) fit the standard definition of those terms. Otherwise, supply a definition, or an unambiguous example.
Make an effort to show how your arguments and claims tie into the larger point you’re trying to make. If the argument is getting away from your original point, explain why, and suggest ways to reorient.
If your conversational partner offers you examples to illustrate their thinking, and you disagree with the examples or interpretation, then try using those examples to make your point. For example, you clearly disagree with some aspect of my previous comment about redundancy, but based on your response, I can’t really discern what you’re disagreeing with or why.
I’m ready to let go of this conversation, but if you’re motivated to make your claims and arguments more legible to me, then I am happy to hear more on the subject. No worries either way.
Upstream, this subthread started when the OP said:

I am confused about the Landauer limit for biological cells other than nerve cells, as it only applies to computation, but I want to ask, is this notion actually true?
To which I replied
Biological cells are robots that must perform myriad physical computations, all of which are tightly constrained by the thermodynamic Landauer Limit. This applies to all the critical operations of cells including DNA/cellular replication, methylation, translation, etc.
You then replied with a tangential thread (from my perspective) about ‘erasing genetic information’, which is not a subgoal of a biological cell (if anything the goal of a biological cell is the exact opposite—to replicate genetic information!)
So let me expand my claim/argument:
A robot is a physical computer built out of atomic widgets: sensors, actuators, connectors, logic gates, ROM, RAM, interconnect/wires, etc. Each of these components is also a physical computer bound by the Landauer limit.
A nanobot/cell in particular is a robot with the unique ability to replicate—to construct a new copy of itself. This requires a large number of bit erasures and thus energy expenditure proportional to the information content of the cell.
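To put a rough number on “proportional to the information content of the cell”: the Landauer bound is kT·ln 2 of heat per bit irreversibly erased. A minimal sketch at 310 K, using a purely illustrative figure of ~1e11 bits (a number of that order comes up later in this thread, not a value taken from the sources):

```python
import math

k_B = 1.380649e-23        # Boltzmann constant, J/K
T = 310.0                 # roughly physiological temperature, K

landauer_per_bit = k_B * T * math.log(2)   # minimum heat per bit erased, ~3e-21 J

N_bits = 1e11             # illustrative information content of a bacterium (assumed)
E_min = N_bits * landauer_per_bit

print(f"Landauer bound per bit at {T:.0f} K: {landauer_per_bit:.2e} J")
print(f"Minimum heat to irreversibly write ~{N_bits:.0e} bits: {E_min:.2e} J")   # ~3e-10 J
```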
Thermodynamic/energy efficiency is mostly a measure of the fundamental widgets themselves. For example, in a modern digital computer the thermodynamic efficiency is a property of the process node, which determines the size, voltage, and electron flow of transistors and interconnect. CMOS chips have increased in thermodynamic efficiency over time a la Moore’s Law.
So then we can look at a biological cell as a nanobot and analyze the thermodynamic efficiency of its various elemental computational widgets, which include DNA to RNA transcription (reading from DNA ROM to RNA RAM cache), computations (various RNA operations, methylation, protein interactions, etc.), and translation (RNA to proteins), and I provided links to sources establishing that these operations are all efficient down to the Landauer limit.
Then there is only one other notion of efficiency we may concern ourselves with—which is system-level circuit efficiency. I mostly avoided discussing this because it’s more complex to analyze and also largely orthogonal to low-level thermodynamic/energy efficiency. For example, you could have 2 different circuits that both add 32-bit numbers, where one uses 100k logic gates and the other uses 100M logic gates. Obviously the second circuit uses more energy (assuming the same process node), but that’s really a question of circuit efficiency/inefficiency, not thermodynamic efficiency (which is a process node property).
But that being said, I gave one example of a whole-system operation—which is the energy required for a cell to self-replicate during mitosis, and sources indicating this uses near the minimal energy given estimates of the bit entropy of the cell in question (E. coli).
So either:
1.) You disagree with my sources that biological cells are near thermodynamically optimal/efficient in their elementary atomic subcomputations (which has nothing to do with your tangential test tube example). If you do disagree here, specify exactly which important atomic subcomputation(s) you believe are not close to optimal.

2.) Grant 1.) but disagree that cells are efficient at the circuit level for replication/mitosis, which then necessarily implies that for each biological cell (i.e. E. coli) there is some way to specify a functionally equivalent cell which does all the same things and is just as effective at replicating E. coli’s DNA, but also has much lower (i.e. OOM lower) bit entropy (and thus is probably much smaller, or much less coherent/structured).

3.) You now agree on these key points and we can conclude a successful conversation and move on.

My standard for efficiency differences is OOM.
The E. coli calculations make no sense to me. They posit huge orders-of-magnitude differences between an “optimal” silicon-based machine and a carbon one (an E. coli cell). I attribute this to bogus calculations.
The one part I scrutinized: they use equation 7 to estimate the information content of an E. coli bacterium to be ~1/2 TB. Now that just sounds absurd to me. That sounds like the amount you’d need to specify the full state of an E. coli at a given point in time (and indeed, that is what equation 7 seems to be doing). They then say that E. coli performs the task of forming an atomically precise machine out of a max entropy state, instead of the actual task of “make a functioning E. coli cell, never mind the exact atomic conditions”, and see how long it would take some kind of gimped silicon computer (because “surely silicon machines can’t function in kilokelvin temperatures?”) to do that task. Then they say “oh look, silicon machines are 3 OOM slower than biological cells”.
They then say that E. coli performs the task of forming an atomically precise machine out of a max entropy state, instead of the actual task of “make a functioning E. coli cell, never mind the exact atomic conditions”, and see how long it would take some kind of gimped silicon computer (because “surely silicon machines can’t function in kilokelvin temperatures?”) to do that task. Then they say “oh look, silicon machines are 3 OOM slower than biological cells”.
The methodology they are using to estimate the bit info content of the bio cell is sound, but the values they plug in result in a conservative overestimate. A functioning E. coli cell does require atomically precise assembly of at least some components (notably DNA), but naturally there is some leeway in the exact positioning and dynamic deformation of other components (like the cell wall), etc. Still, a bio cell is an atomically precise machine, more or less.

They assume 32 bits of xyz spatial position for each component, they assume atoms as the building blocks, and they don’t consider alternate configurations, but that seems to be a difference of one or a few OOM, not many.

And indeed, from my calc, their estimate is about 1 OOM from the maximum info content as implied by the cell’s energy dissipation and time for replication (which worked out to 1e11 bits, I think). There was another paper linked earlier which used a more detailed methodology and got an estimate of a net energy use of only 6x the lower unreliable Landauer bound, which also constrains the true bit content to be in the range of 1e10 to 1e11 bits.
Then they say “oh look, silicon machines are 3 OOM slower than biological cells”.
Not quite. They say a minimalist serial von Neumann silicon machine is 2 OOM slower:

“For this, the total time needed to emulate the bio-cell task (i.e., equivalent of 1e11 output bits) will be 510 000 s, which is more than 200 larger than time needed for the bio-cell.”
Their silicon cell is OOM inefficient because: 1.) it is serial rather than parallel, and 2.) it uses digital circuits rather than analog computations
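For concreteness, here is the back-of-envelope arithmetic implied by the figures quoted above (a sketch using only the numbers in the quote, not anything taken independently from the paper):

```python
silicon_time_s = 510_000      # quoted time for the minimalist serial von Neumann machine
ratio = 200                   # "more than 200 larger than time needed for the bio-cell"
bits = 1e11                   # "equivalent of 1e11 output bits"

bio_cell_time_s = silicon_time_s / ratio
print(bio_cell_time_s, "s, i.e. about", bio_cell_time_s / 60, "min")
# ~2550 s, roughly 42 min: an upper bound on the bio-cell time implied by "more than 200x"

print(bits / silicon_time_s, "bits/s effective throughput for the serial machine")
# ~2e5 bits/s
```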
Thanks for taking the time to write this out, it’s a big upgrade in terms of legibility! To be clear, I don’t have a strong opinion on whether or not biological cells are close to maximum thermodynamic efficiency. Instead, I am claiming that aspects of this discussion need to be better-defined and supported to facilitate productive discussion here.
I’ll just do a shallow dive into a couple of aspects. Here’s a quote from one of your sources:
Finally, the processing costs [of transcription] are low: reading a 2-bit base-pair costs only 1 ATP.
I agree with this source that, if we ignore the energy costs to maintain the cellular architecture that permits transcription, it takes 1 ATP to add 1 rNTP to the growing mRNA chain.
In connecting this to the broader debate about thermodynamic efficiency, however, we have a few different terms and definitions for which I don’t yet see an unambiguous connection.
The Landauer limit, which is defined as the minimum energy cost of deleting 1 bit.
The energy cost of adding 1 rNTP to a growing mRNA chain and thereby (temporarily) copying 1 bit.
The power per rNTP required to maintain a copy of a particular mRNA in the cell, given empirical rates of mRNA decay.
I don’t see a well-grounded way to connect these energy and power requirements for building and maintaining an mRNA molecule to the Landauer limit. So at least as far as mRNA goes, I am not sold on (1).
disagree that cells are efficient at the circuit level for replication/mitosis, which then necessarily implies that for each biological cell (i.e. E. coli) there is some way to specify a functionally equivalent cell which does all the same things and is just as effective at replicating E. coli’s DNA, but also has much lower (i.e. OOM lower) bit entropy (and thus is probably much smaller, or much less coherent/structured)
I’m sure you understand this, but to be clear, “doing all the same things” as a cell would require being a cell. It’s not at all obvious to me why being effective at replicating E. coli’s DNA would be a design requirement for a nanobot. The whole point of building nanobots is to use different mechanisms to accomplish engineering requirements that humans care about. So for example, “can we build a self-replicating nanobot that produces biodiesel in a scalable manner more efficiently than a genetically engineered E. coli cell?” is a natural, if still underdefined, way to think about the relative energy efficiency of E. coli vs nanobots.
Instead, it seems like you are asking whether it’s possible to, say, copy physical DNA into physical mRNA using at least 10x less energy than is needed during synthesis by RNA polymerase. To that, I say “probably not.” However, I also don’t think we can learn anything much from that conclusion about the potential to improve medicine or commercial biosynthesis, in terms of energy costs or any other commercially or medically relevant metric, by using nanobots instead of cells. If you are exclusively concerning yourself with questions like “is there a substantially energetically cheaper way to copy DNA into mRNA,” I will, with let’s say 75% confidence, agree with you that the answer is no.
Entropy is conserved. Copying a bit of DNA/RNA/etc. necessarily erases a bit from the environment. The Landauer limit applies.
I’m sure you understand this, but to be clear, “doing all the same things” as a cell would require being a cell. It’s not at all obvious to me why being effective at replicating E. coli’s DNA would be a design requirement for a nanobot.
This is why I used the term Pareto optimal and the foundry process analogy. A 32nm node tech is not Pareto optimal—a later node could do pretty much everything it does, only better.
If biology is far from Pareto optimal, then it should be possible to create strong nanotechnology—artificial cells that do everything bio cells do, but OOM better. Most importantly, strong nanotech could replicate Much faster while using Much less energy.
Strong nanotech has been proposed as one of the main methods that unfriendly AI could near instantly kill humanity.
If biology is Pareto optimal at what it does then only weak nanotech is possible which is just bioengineering by another (unnecessary) name.
This relates to the debate about evolution: my prior is that evolution is mysterious, subtle, and superhuman. If you think you found a design flaw, you are probably wrong. This has borne out well so far—the inverted retina is actually optimal, some photosynthesis is as efficient as efficient solar cells, etc.

None of this has anything to do with goals other than biological goals. Considerations of human uses of biology are irrelevant.
Entropy is conserved. Copying a bit of DNA/RNA/etc. necessarily erases a bit from the environment. The Landauer limit applies.
This is not a legible argument to me. To make it legible, you would need a person who does not have all the interconnected knowledge that is in your head to be able to examine these sentences and (quickly) understand how these arguments prove the conclusion. N of 1, but I am a biomedical engineering graduate student and I cannot parse this argument. What is “the environment?” What do you mean mechanistically when you say “copying a bit?” What exactly is physically happening when this “bit” is “erased” in the case of, say, adding an rNTP to a growing mRNA chain?
If biology is far from Pareto optimal, then it should be possible to create strong nanotechnology—artificial cells that do everything bio cells do, but OOM better. Most importantly, strong nanotech could replicate Much faster while using Much less energy.
Strong nanotech has been proposed as one of the main methods that unfriendly AI could near instantly kill humanity.
If biology is Pareto optimal at what it does then only weak nanotech is possible which is just bioengineering by another (unnecessary) name.
Here’s another thing you could do to flesh things out:
Describe a specific form of “strong nanotech” that you believe some would view as a main method an AI could use to kill humanity nearly instantly, but that is ruled out based on your belief that biology is Pareto optimal. Obviously, I’m not asking for blueprints. Just a very rough general description, like “nanobots that self-replicate, infect everybody’s bodies, and poison them all simultaneously at a signal from the AI.”
I may be assuming familiarity with the physics of computation and reversible computing.
Copying information necessarily overwrites and thus erases information (whatever was stored prior to the copy write). Consider a simple memory with 2 storage cells. Copying the value of cell 0 to cell 1 involves reading from cell 0 and then writing said value to cell 1, overwriting whatever cell 1 was previously storing.
The only way to write to a memory without erasing information is to swap, which naturally is fully reversible. So a reversible circuit could swap the contents of the storage cells, but swap is fundamentally different than copy. Reversible circuits basically replace all copies/erasures with swaps, which dramatically blows up the circuit (they always have the same number of outputs as inputs, so simple circuits like AND produce an extra garbage output which must propagate indefinitely).
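To make the copy-vs-swap distinction concrete, here is a toy enumeration of a two-cell, one-bit-per-cell memory (an illustrative sketch only):

```python
from itertools import product

# All four joint states of two one-bit storage cells (cell0, cell1).
states = list(product([0, 1], repeat=2))

copy_map = {(a, b): (a, a) for a, b in states}   # COPY cell0 -> cell1, overwriting cell1
swap_map = {(a, b): (b, a) for a, b in states}   # SWAP cell0 <-> cell1

# COPY is many-to-one: distinct inputs collapse to the same output,
# so the old value of cell1 cannot be recovered (information erased).
print(len(set(copy_map.values())), "distinct outputs from 4 inputs under COPY")   # 2

# SWAP is a bijection: every output comes from exactly one input, so it is reversible.
print(len(set(swap_map.values())), "distinct outputs from 4 inputs under SWAP")   # 4
```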
An assembler which takes some mix of atoms/parts from the environment and then assembles them into some specific structure is writing information and thus also erasing information. The assembly process removes/erases entropy from the original configuration of the environment (atoms/parts) memory, which necessarily implies an increase of entropy somewhere else - so you could consider the Landauer limit as an implication of the second law of thermodynamics. Every physical system is a memory, and physical transitions are computations. To avoid this irreversibility, the assembler would have to permanently store garbage bits equivalent to what it writes, which isn’t viable.

As a specific example, consider a physical system constrained to a simple lattice grid of atoms, each of which can be in one of two states and thus stores a single bit. An assembler which writes a specific bitmap (say an image of the Mona Lisa) to this memory must then necessarily store all the garbage bits previously in the memory, or erase them (which just moves them to the environment). Information/entropy is conserved.
This is very helpful. I am definitely unfamiliar with the physics of computation and reversible computing, but your description was quite clear.
If I’m following you, “delete” in the case of mRNA assembly would mean that we have “erased” one rNTP from the solution, then “written” it into the growing mRNA molecule. The Landauer limit gives the theoretical minimal energy required for the “delete” part of this operation.
You are saying that since 1 high energy P bond (~1 ATP) is all that’s required to do not only the “delete,” but also the “write,” and since the energy contained in this bond is pretty close to the Landauer limit, that we can say there’s relatively little room to improve the energy efficiency of an individual read/write operation by using some alternative mechanism.
As such, mRNA assembly approaches not only Pareto optimality, but a true minimum of energy use for this particular operation. It may be that it’s possible to improve other aspects of the read/write operation, such as its reliability (mRNA transcription is error-prone) or speed. However, if the cell is Pareto optimal, then this would come at a tradeoff with some other trait, such as energy efficiency.
If I am interpreting you correctly so far, then I think there are several points to be made.
There may be a file drawer problem operating here. Is a paper finding that some biological mechanism is far from Pareto optimal or maximally thermodynamically efficient going to be published? I am not convinced about how confidently we can extrapolate beyond specific examples. This makes me quite hesitant to embrace the idea that individual computational operations, not to mention whole cell-scale architectures, are maximally energy efficient.
The energy of ATP hydrolysis is still almost 30x the Landauer limit, even ignoring the energy-consuming cellular context in which its energy can be used to do useful delete/copy operations. So there seems to be theoretical room to improve copy operations by 1 OOM even in a cellular context, not to mention gains by reorganizing the large-scale architecture. (See the quick arithmetic sketched below.)
Cells are certainly not Pareto optimal for achieving useful outcomes from the perspective of intelligent agents, such as a biosynthesis company or a malevolent AI. Even if I completely accepted your argument that wild-type cells are both Pareto optimal self-replicators and, for practical purposes, approaching the limit of energy efficiency in all their operations, this would have little bearing on the ability of agents to design cells/nanobots to accomplish specific practical tasks more efficiently than wild-type cells on any given metric of performance you care to name, by many OOMs.
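For reference, the quick arithmetic behind the “almost 30x” figure (a sketch assuming a round ~50 kJ/mol free energy for ATP hydrolysis under cellular conditions; the standard-state value is lower, ~30.5 kJ/mol):

```python
import math

k_B = 1.380649e-23        # Boltzmann constant, J/K
N_A = 6.02214076e23       # Avogadro's number, 1/mol
T = 310.0                 # K

landauer_J = k_B * T * math.log(2)     # ~2.97e-21 J per bit erased
atp_J = 50e3 / N_A                     # ~8.3e-20 J per ATP hydrolysis (assumed ~50 kJ/mol)

print(atp_J / landauer_J)              # ~28, i.e. "almost 30x" the per-bit Landauer limit
```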
In the vein of considering our appetite for disagreement, now that I understand the claims you are making more clearly, I think that, with the exception of the tractability of engineering grey goo, any differences of opinion between you and me are over levels of confidence. My guess is that there’s not much room to converge, because I don’t have the time to devote to this specific research topic.
All in all, though, I appreciate the effort you put into making these arguments, and I learned something valuable about the physics of computation. So thank you for that.
If anything I’d say the opposite is true—inefficiency for key biochemical processes that are under high selection pressure is surprising and more notable. For example I encountered some papers about the apparent inefficiency of a key photosynthesis enzyme the other day.
I don’t know quite what you are referring to here, but I’m guessing you are confusing the reliable vs. unreliable limits, which I discussed in my brain efficiency post and linked somewhere else in this thread.
That paper Gunnar found analyzes replication efficiency in more depth:
More significantly, these calculations also establish that the E. coli bacterium produces an amount of heat less than six times (220 n_pep / 42 n_pep) as large as the absolute physical lower bound dictated by its growth rate, internal entropy production, and durability. In light of the fact that the bacterium is a complex sensor of its environment that can very effectively adapt itself to growth in a broad range of different environments, we should not be surprised that it is not perfectly optimized for any given one of them. Rather, it is remarkable that in a single environment, the organism can convert chemical energy into a new copy of itself so efficiently that if it were to produce even a quarter as much heat it would be pushing the limits of what is thermodynamically possible! This is especially the case since we deliberately underestimated the reverse reaction rate with our calculation of p_hyd, which does not account for the unlikelihood of spontaneously converting carbon dioxide back into oxygen. Thus, a more accurate estimate of the lower bound on β⟨Q⟩ in future may reveal E. coli to be an even more exceptionally well-adapted self-replicator than it currently seems.
I haven’t read the paper in detail enough to know whether that 6x accounts for reliability/errors or not.
or erase them (which just moves them to the environment)
I don’t follow this. In what sense is a bit getting moved to the environment?
I previously read deconfusing Landauer’s principle here and… well, I don’t remember it in any depth. But if I consider the model shown in figures 2-4, I get something like: “we can consider three possibilities for each bit of the grid. Either the potential barrier is up, and if we perform some measurement we’ll reliably get a result we interpret as 1. Or it’s up, and 0. Or the potential barrier is down (I’m not sure if this would be a stable state for it), and if we perform that measurement we could get either result.”
But then if we lower the barrier, tilt, and raise the barrier again, we’ve put a bit into the grid but it doesn’t seem to me that we’ve moved the previous bit into the environment.
I think the answer might be “we’ve moved a bit into the environment, in the sense that the entropy of the environment must have increased”? But that needs Landauer’s principle to see it, and I take the example as being “here’s an intuitive illustration of Landauer’s principle”, in which case it doesn’t seem to work for that. But perhaps I’m misunderstanding something?
(Aside, I said in the comments of the other thread something along the lines of, it seems clearer to me to think of Landauer’s principle as about the energy cost of setting bits than the energy cost of erasing them. Does that seem right to you?)
I think the answer might be “we’ve moved a bit into the environment, in the sense that the entropy of the environment must have increased”?
Yes, entropy/information is conserved, so you can’t truly erase bits. Erasure just moves them across the boundary separating the computer and the environment. This typically manifests as heat.
Landauer’s principle is actually about the minimum amount of energy required to represent or maintain a bit reliably in the presence of thermal noise. Erasure/copying then results in equivalent heat energy release.
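One standard way to make the reliable-bit point quantitative is the barrier argument (a sketch, not quoted from any source in this thread): a bit held behind an energy barrier E_b is flipped by thermal noise with probability roughly exp(-E_b / kT), so holding the error rate at p requires E_b on the order of kT·ln(1/p), several times the kT·ln 2 erasure bound:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 310.0            # K
kT = k_B * T
ln2 = math.log(2)

for p in (1e-1, 1e-2, 1e-6):
    E_b = kT * math.log(1.0 / p)                    # barrier needed for error probability ~p
    print(f"p = {p:g}: E_b ~ {E_b / (kT * ln2):.1f} x (kT ln 2)")
# Output: ~3.3x, ~6.6x, ~19.9x the bare erasure bound of 1 x (kT ln 2) per bit.
```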
I want to jump in and provide another reference that supports jacob_cannell’s claim that cells (and RNA replication) operate close to the thermodynamic limit.
deriving a lower bound for the amount of heat that is produced during a process of self-replication in a system coupled to a thermal bath. We find that the minimum value for the physically allowed rate of heat production is determined by the growth rate, internal entropy, and durability of the replicator, and we discuss the implications of this finding for bacterial cell division, as well as for the pre-biotic emergence of self-replicating nucleic acids.

— Statistical physics of self-replication, Jeremy England: https://aip.scitation.org/doi/10.1063/1.4818538
There are some caveats that apply if we compare this to different nanobot implementations:
A substrate needing fewer atoms/bonds might be used—then we’d have to assemble fewer atoms and thus need less energy. DNA is already very compact, there is no OOM left to spare (rough numbers sketched after this list), but maybe the rest of the cell content could be improved. As mentioned, for viruses there is really no OOM left.
A heat bath and a solution of needed atoms are assumed. But no reuse of more complicated molecules. Maybe there are sweet spots in engineering space between macroscopic source materials (refined silicon, iron, pure oxygen, etc., as in industrial processes) and a nutrient soup.
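For what it’s worth, here are the rough numbers behind “DNA is already very compact” (a sketch using standard textbook values for the E. coli genome and B-DNA geometry):

```python
import math

genome_bp = 4.6e6            # E. coli K-12 genome length, base pairs (textbook value)
bits = 2 * genome_bp         # 2 bits of sequence information per base pair
print(bits / 8 / 1e6, "MB of sequence information")        # ~1.15 MB

rise_nm = 0.34               # axial rise per base pair in B-DNA, nm
radius_nm = 1.0              # double helix is ~2 nm in diameter
volume_nm3 = genome_bp * rise_nm * math.pi * radius_nm**2
print(bits / volume_nm3, "bits per cubic nanometre")       # ~1.9 bits/nm^3
```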
This part about function is important, since I don’t think the things we want out of nanotech perfectly overlap with biology itself, and that can cause energy efficiency to increase or decrease.
My comment above addresses this