This is very helpful. I am definitely unfamiliar with the physics of computation and reversible computing, but your description was quite clear.
If I’m following you, “delete” in the case of mRNA assembly would mean that we have “erased” one rNTP from the solution, then “written” it into the growing mRNA molecule. The Landauer limit gives the theoretical minimum energy required for the “delete” part of this operation.
You are saying that since one high-energy phosphate bond (~1 ATP) is all that’s required to do not only the “delete” but also the “write,” and since the energy contained in this bond is pretty close to the Landauer limit, we can say there’s relatively little room to improve the energy efficiency of an individual read/write operation by using some alternative mechanism.
As such, mRNA assembly approaches not only Pareto optimality, but a true minimum of energy use for this particular operation. It may be that it’s possible to improve other aspects of the read/write operation, such as its reliability (mRNA transcription is error-prone) or speed. However, if the cell is Pareto optimal, then this would come at a tradeoff with some other trait, such as energy efficiency.
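To put a rough number on the “delete” side of this picture, here is a minimal sketch, assuming (my framing, not necessarily the one used upthread) that the erased information per incorporated nucleotide is the log2(4) = 2 bits needed to specify which of the four rNTPs gets written, at T ≈ 310 K:

```python
import math

# Rough scale of the Landauer cost for the mRNA "delete"/"write" step described above.
# Assumption (mine, not from the thread): the erased information per incorporated
# nucleotide is log2(4) = 2 bits, since there are four possible rNTPs.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 310.0            # roughly physiological temperature, K

bits_per_nucleotide = math.log2(4)            # 2 bits
landauer_per_bit = k_B * T * math.log(2)      # ~2.97e-21 J per bit
floor_per_nucleotide = bits_per_nucleotide * landauer_per_bit
print(floor_per_nucleotide)                   # ~5.9e-21 J per incorporated nucleotide
```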
If I am interpreting you correctly so far, then I think there are several points to be made.
There may be a file drawer problem operating here. Is a paper finding that some biological mechanism is far from Pareto optimal or maximally thermodynamically efficient going to be published? I am not convinced about how confidently we can extrapolate beyond specific examples. This makes me quite hesitant to embrace the idea that individual computational operations, not to mention whole cell-scale architectures, are maximally energy efficient.
The energy of ATP hydrolysis is still almost 30x the Landauer limit, even ignoring the energy-consuming cellular context in which its energy can be used to do useful delete/copy operations. So there seems to be theoretical room to improve copy operations by 1 OOM even in a cellular context, not to mention gains by reorganizing the large-scale architecture.
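As a sanity check on that ~30x figure, here is a back-of-the-envelope sketch, assuming ~50 kJ/mol for ATP hydrolysis under cellular conditions and T ≈ 310 K (the in vivo value varies, so treat this as order-of-magnitude only):

```python
import math

# Back-of-the-envelope check of "almost 30x": compare the free energy of one ATP
# hydrolysis to the Landauer limit for erasing one bit.
# Assumptions: T = 310 K, and ~50 kJ/mol for ATP hydrolysis under cellular
# conditions (the in vivo value varies, roughly 45-60 kJ/mol).
k_B = 1.380649e-23     # Boltzmann constant, J/K
N_A = 6.02214076e23    # Avogadro's number, 1/mol
T = 310.0              # temperature, K

landauer_per_bit = k_B * T * math.log(2)     # ~2.97e-21 J
atp_per_molecule = 50e3 / N_A                # ~8.3e-20 J per ATP hydrolyzed
print(atp_per_molecule / landauer_per_bit)   # ~28, i.e. "almost 30x"
```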
Cells are certainly not Pareto optimal for achieving useful outcomes from the perspective of intelligent agents, such as a biosynthesis company or a malevolent AI. Even if I completely accepted your argument that wild-type cells are both Pareto optimal self-replicators and, for practical purposes, approaching the limit of energy efficiency in all their operations, this would have little bearing on the ability of agents to design cells/nanobots that accomplish specific practical tasks many OOMs more efficiently than wild-type cells, on any given metric of performance you care to name.
In the vein of considering our appetite for disagreement, now that I understand the claims you are making more clearly, I think that, with the exception of the tractability of engineering grey goo, any differences of opinion between you and me are over levels of confidence. My guess is that there’s not much room to converge, because I don’t have the time to devote to this specific research topic.
All in all, though, I appreciate the effort you put into making these arguments, and I learned something valuable about the physics of computation. So thank you for that.
If anything I’d say the opposite is true: inefficiency in key biochemical processes that are under high selection pressure is surprising and more notable. For example, I encountered some papers the other day about the apparent inefficiency of a key photosynthesis enzyme.
I don’t know quite what you are referring to here, but I’m guessing you are confusing the reliable vs. unreliable limits, which I discussed in my brain efficiency post and linked somewhere else in this thread.
That paper Gunnar found (https://aip.scitation.org/doi/10.1063/1.4818538) analyzes replication efficiency in more depth:
More significantly, these calculations also establish that the E. coli bacterium produces an amount of heat less than six times (220 n_pep / 42 n_pep) as large as the absolute physical lower bound dictated by its growth rate, internal entropy production, and durability. In light of the fact that the bacterium is a complex sensor of its environment that can very effectively adapt itself to growth in a broad range of different environments, we should not be surprised that it is not perfectly optimized for any given one of them. Rather, it is remarkable that in a single environment, the organism can convert chemical energy into a new copy of itself so efficiently that if it were to produce even a quarter as much heat it would be pushing the limits of what is thermodynamically possible! This is especially the case since we deliberately underestimated the reverse reaction rate with our calculation of p_hyd, which does not account for the unlikelihood of spontaneously converting carbon dioxide back into oxygen. Thus, a more accurate estimate of the lower bound on β⟨Q⟩ in future may reveal E. coli to be an even more exceptionally well-adapted self-replicator than it currently seems.
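Spelling out the quoted figure (a trivial sketch using only the two numbers in the passage above, both given per peptide bond):

```python
# Ratio of E. coli's estimated heat output to the paper's thermodynamic lower bound,
# using the two figures from the quoted passage (both per peptide bond, n_pep).
estimated_heat = 220.0  # paper's estimate of beta*<Q> per peptide bond
lower_bound = 42.0      # paper's stated lower bound, same units
print(estimated_heat / lower_bound)  # ~5.2, i.e. "less than six times" the bound
```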
I haven’t read the paper in enough detail to know whether that 6x accounts for reliability/errors or not.