I may be assuming familiarity with the physics of computation and reversible computing.
Copying information necessarily overwrites, and thus erases, information (whatever the destination stored prior to the write). Consider a simple memory with 2 storage cells. Copying the value of cell 0 to cell 1 involves reading from cell 0 and then writing that value to cell 1, overwriting whatever cell 1 was previously storing.
The only way to write to a memory without erasing information is to swap, which is naturally fully reversible. So a reversible circuit could swap the contents of the storage cells, but swap is fundamentally different from copy. Reversible circuits basically replace all copies/erasures with swaps, which dramatically blows up the circuit (they always have the same number of outputs as inputs, so simple circuits like AND produce an extra garbage output which must propagate indefinitely).
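To make the copy-vs-swap distinction concrete, here is a minimal Python sketch of that two-cell memory (my own toy illustration with made-up helper names, not code from the original comment):

```python
# Minimal toy model: a 2-cell memory where each cell holds one bit.
# COPY loses information about the destination's prior contents; SWAP does not.

from itertools import product

def copy_op(cells):
    """Copy cell 0 into cell 1, overwriting whatever cell 1 held."""
    return (cells[0], cells[0])

def swap_op(cells):
    """Exchange the contents of the two cells."""
    return (cells[1], cells[0])

states = list(product([0, 1], repeat=2))

# COPY maps 4 distinct input states onto only 2 output states, so distinct
# inputs collide and the operation cannot be inverted (information is erased).
print(sorted(set(copy_op(s) for s in states)))   # [(0, 0), (1, 1)]

# SWAP is a bijection on the 4 states: every output has exactly one preimage,
# so it is fully reversible and erases nothing.
print(sorted(set(swap_op(s) for s in states)))   # all 4 states survive

# A reversible AND (Toffoli gate with the target wire initialized to 0) must
# keep both inputs around as "garbage" outputs to stay invertible.
def reversible_and(a, b, t=0):
    return (a, b, t ^ (a & b))   # 3 outputs for what is logically 1 result
```

COPY collapses four possible memory states into two, so no inverse exists; SWAP merely permutes them; and the Toffoli-style AND keeps its inputs as the extra garbage outputs mentioned above.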
An assembler which takes some mix of atoms/parts from the environment and then assembles them into some specific structure is writing information and thus also erasing information. The assembly process removes/erases entropy from the original configuration of the environment (atoms/parts) memory, which necessarily implies an increase of entropy somewhere else, so you could consider the Landauer limit an implication of the second law of thermodynamics. Every physical system is a memory, and physical transitions are computations. To avoid that erasure and remain reversible, the assembler would have to permanently store garbage bits equivalent to what it writes, which isn’t viable.
As a specific example, consider a physical system constrained to a simple lattice grid of atoms, each of which can be in one of two states and thus stores a single bit. An assembler which writes a specific bitmap (say an image of the Mona Lisa) to this memory must then necessarily store all the garbage bits previously in the memory, or erase them (which just moves them to the environment). Information/entropy is conserved.
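As a rough sense of scale (my numbers, assuming room temperature and an arbitrary 1-megabit bitmap, neither of which is specified above), the Landauer cost of those erasures is:

```python
# Back-of-envelope Landauer cost for the lattice-bitmap example.
# The bitmap size and temperature are illustrative assumptions.

import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 300.0               # assumed ambient temperature, K

landauer_per_bit = k_B * T * math.log(2)    # minimum heat per erased bit
print(f"{landauer_per_bit:.3e} J per bit")  # ~2.87e-21 J

# Writing a 1-megabit "Mona Lisa" over unknown prior contents erases ~1e6 bits:
n_bits = 1_000_000
print(f"{n_bits * landauer_per_bit:.3e} J total")  # ~2.9e-15 J
```

Tiny in absolute terms, but it is a floor that applies to any assembler regardless of mechanism.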
This is very helpful. I am definitely unfamiliar with the physics of computation and reversible computing, but your description was quite clear.
If I’m following you, “delete” in the case of mRNA assembly would mean that we have “erased” one rNTP from the solution, then “written” it into the growing mRNA molecule. The Landauer limit gives the theoretical minimal energy required for the “delete” part of this operation.
You are saying that since 1 high energy P bond (~1 ATP) is all that’s required to do not only the “delete,” but also the “write,” and since the energy contained in this bond is pretty close to the Landauer limit, that we can say there’s relatively little room to improve the energy efficiency of an individual read/write operation by using some alternative mechanism.
As such, mRNA assembly approaches not only Pareto optimality, but a true minimum of energy use for this particular operation. It may be that it’s possible to improve other aspects of the read/write operation, such as its reliability (mRNA transcription is error-prone) or speed. However, if the cell is Pareto optimal, then this would come at a tradeoff with some other trait, such as energy efficiency.
If I am interpreting you correctly so far, then I think there are several points to be made.
There may be a file drawer problem operating here. Is a paper finding that some biological mechanism is far from Pareto optimal or maximally thermodynamically efficient going to be published? I am not convinced about how confidently we can extrapolate beyond specific examples. This makes me quite hesitant to embrace the idea that individual computational operations, not to mention whole cell-scale architectures, are maximally energy efficient.
The energy of ATP hydrolysis is still almost 30x the Landauer limit, even ignoring the energy-consuming cellular context in which its energy can be used to do useful delete/copy operations. So there seems to be theoretical room to improve copy operations by 1 OOM even in a cellular context, not to mention gains by reorganizing the large-scale architecture.
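For concreteness, here is the arithmetic behind that roughly 30x figure as I understand it. The ~-50 kJ/mol free energy of ATP hydrolysis under cellular conditions is my assumption; the standard-state value of ~-30 kJ/mol would give roughly half the ratio.

```python
# Rough check of the "almost 30x the Landauer limit" claim.
# Assumes ATP hydrolysis releases ~50 kJ/mol under cellular conditions.

import math

k_B = 1.380649e-23          # Boltzmann constant, J/K
N_A = 6.02214076e23         # Avogadro's number, 1/mol
T = 310.0                   # body temperature, K

landauer = k_B * T * math.log(2)            # ~2.97e-21 J per bit
atp_per_molecule = 50_000.0 / N_A           # ~8.3e-20 J per ATP hydrolyzed

print(f"ATP / Landauer ratio: {atp_per_molecule / landauer:.1f}")  # ~28
```

That ratio of roughly 28 is where the "about one OOM of theoretical headroom per copy operation" reading comes from.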
Cells are certainly not Pareto optimal for achieving useful outcomes from the perspective of intelligent agents, such as a biosynthesis company or a malevolent AI. Even if I completely accepted your argument that wild-type cells are both Pareto optimal self-replicators and, for practical purposes, approaching the limit of energy efficiency in all their operations, this would have little bearing on the ability of agents to design cells/nanobots to accomplish specific practical tasks more efficiently than wild-type cells on any given metric of performance you care to name, by many OOMs.
In the vein of considering our appetite for disagreement, now that I understand the claims you are making more clearly, I think that, with the exception of the tractability of engineering grey goo, any differences of opinion between you and me are over levels of confidence. My guess is that there’s not much room to converge, because I don’t have the time to devote to this specific research topic.
All in all, though, I appreciate the effort you put into making these arguments, and I learned something valuable about the physics of computation. So thank you for that.
If anything I’d say the opposite is true—inefficiency for key biochemical processes that are under high selection pressure is surprising and more notable. For example I encountered some papers about the apparent inefficiency of a key photosynthesis enzyme the other day.
I don’t quite know what you are referring to here, but I’m guessing you are confusing the reliable vs. unreliable limits which I discussed in my brain efficiency post and linked somewhere else in this thread.
That paper Gunnar found analyzes replication efficiency in more depth:
More significantly, these calculations also establish that the E. coli bacterium produces an amount of heat less than six times (220 n_pep / 42 n_pep) as large as the absolute physical lower bound dictated by its growth rate, internal entropy production, and durability. In light of the fact that the bacterium is a complex sensor of its environment that can very effectively adapt itself to growth in a broad range of different environments, we should not be surprised that it is not perfectly optimized for any given one of them. Rather, it is remarkable that in a single environment, the organism can convert chemical energy into a new copy of itself so efficiently that if it were to produce even a quarter as much heat it would be pushing the limits of what is thermodynamically possible! This is especially the case since we deliberately underestimated the reverse reaction rate with our calculation of p_hyd, which does not account for the unlikelihood of spontaneously converting carbon dioxide back into oxygen. Thus, a more accurate estimate of the lower bound on β⟨Q⟩ in future may reveal E. coli to be an even more exceptionally well-adapted self-replicator than it currently seems.
I haven’t read the paper in detail enough to know whether that 6x accounts for reliability/errors or not.
https://aip.scitation.org/doi/10.1063/1.4818538
or erase them (which just moves them to the environment)
I don’t follow this. In what sense is a bit getting moved to the environment?
I previously read deconfusing Landauer’s principle here and… well, I don’t remember it in any depth. But if I consider the model shown in figures 2-4, I get something like: “we can consider three possibilities for each bit of the grid. Either the potential barrier is up, and if we perform some measurement we’ll reliably get a result we interpret as 1. Or it’s up, and 0. Or the potential barrier is down (I’m not sure if this would be a stable state for it), and if we perform that measurement we could get either result.”
But then if we lower the barrier, tilt, and raise the barrier again, we’ve put a bit into the grid but it doesn’t seem to me that we’ve moved the previous bit into the environment.
I think the answer might be “we’ve moved a bit into the environment, in the sense that the entropy of the environment must have increased”? But that needs Landauer’s principle to see it, and I take the example as being “here’s an intuitive illustration of Landauer’s principle”, in which case it doesn’t seem to work for that. But perhaps I’m misunderstanding something?
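For what it’s worth, here is a hedged sketch of the double-well picture as I understand it. The parametrization V(x) = x^4 - b*x^2 + t*x is my own and may not match the exact model in the linked post’s figures.

```python
# Sketch of a double-well "bit": b controls the barrier height, t is the tilt.
# This is an illustrative parametrization, not the one from the referenced post.

import numpy as np

def V(x, barrier=1.0, tilt=0.0):
    return x**4 - barrier * x**2 + tilt * x

x = np.linspace(-1.5, 1.5, 301)

# Barrier up, no tilt: two wells near x = +/- sqrt(b/2); a particle settled in
# the left or right well stores a 0 or a 1.
v_store = V(x, barrier=1.0, tilt=0.0)

# Barrier down: a single well, the two logical states are no longer separated,
# and the stored value gets thermally scrambled.
v_lowered = V(x, barrier=0.0, tilt=0.0)

# Tilt while the barrier is down (then re-raise it): whatever the cell held
# before, it ends up in the favored well. Both prior states map to the same
# final state, which is the many-to-one step that has to dump entropy (heat)
# into the environment.
v_tilted = V(x, barrier=0.0, tilt=0.5)

for name, v in [("store", v_store), ("lowered", v_lowered), ("tilted", v_tilted)]:
    print(name, "minima at x ≈", np.round(x[np.isclose(v, v.min(), atol=1e-9)], 2))
```

The lower-tilt-raise sequence compresses two distinct prior configurations into one, and that phase-space compression is the sense in which the old bit has to show up as entropy somewhere outside the cell.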
(Aside: I said in the comments of the other thread something along the lines of, it seems clearer to me to think of Landauer’s principle as being about the energy cost of setting bits rather than the energy cost of erasing them. Does that seem right to you?)
I think the answer might be “we’ve moved a bit into the environment, in the sense that the entropy of the environment must have increased”?
Yes, entropy/information is conserved, so you can’t truly erase bits. Erasure just moves them across the boundary separating the computer and the environment. This typically manifests as heat.
Landauer’s principle is actually about the minimum amount of energy required to represent or maintain a bit reliably in the presence of thermal noise. Erasure/copying then results in equivalent heat energy release.
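A rough way to quantify the "reliably in the presence of thermal noise" part (my own illustration, using the standard Boltzmann-factor estimate for thermally hopping an energy barrier; the barrier heights are arbitrary):

```python
# Treat a spontaneous bit flip as thermal activation over an energy barrier E_b,
# so the flip probability per thermal "attempt" scales roughly as exp(-E_b / kT).

import math

k_B, T = 1.380649e-23, 300.0                       # J/K, kelvin
print(f"kT*ln(2) at {T:.0f} K ≈ {k_B * T * math.log(2):.2e} J")  # Landauer scale

for barrier_in_kT in [math.log(2), 5, 20, 40]:
    p_flip = math.exp(-barrier_in_kT)              # chance of hopping the barrier
    print(f"E_b = {barrier_in_kT:5.2f} kT -> flip factor ≈ {p_flip:.2e}")
```

A barrier of only kT ln 2 flips half the time per attempt, i.e. the bit is a coin; holding a bit reliably requires a barrier of many kT, and it is only when that well is collapsed (erasure) that the bit’s entropy has to be paid out to the environment as heat.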