Entropy is conserved. Copying a bit of DNA/RNA/etc. necessarily erases a bit from the environment. The Landauer limit applies.
This is not a legible argument to me. To make it legible, you would need a person who does not have all the interconnected knowledge that is in your head to be able to examine these sentences and (quickly) understand how these arguments prove the conclusion. N of 1, but I am a biomedical engineering graduate student and I cannot parse this argument. What is “the environment?” What do you mean mechanistically when you say “copying a bit?” What exactly is physically happening when this “bit” is “erased” in the case of, say, adding an rNTP to a growing mRNA chain?
If biology is far from Pareto optimal, then it should be possible to create strong nanotechnology: artificial cells that do everything bio cells do, but OOMs better. Most importantly, strong nanotech could replicate much faster while using much less energy.
Strong nanotech has been proposed as one of the main methods by which an unfriendly AI could near-instantly kill humanity.
If biology is Pareto optimal at what it does, then only weak nanotech is possible, which is just bioengineering by another (unnecessary) name.
Here’s another thing you could do to flesh things out:
Describe a specific form of “strong nanotech” that you believe some would view as a main method an AI could use to kill humanity nearly instantly, but that is ruled out based on your belief that biology is Pareto optimal. Obviously, I’m not asking for blueprints. Just a very rough general description, like “nanobots that self-replicate, infect everybody’s bodies, and poison them all simultaneously at a signal from the AI.”
I may be assuming familiarity with the physics of computation and reversible computing.
Copying information necessarily overwrites and thus erases information (whatever was stored at the destination before the write). Consider a simple memory with 2 storage cells. Copying the value of cell 0 to cell 1 involves reading from cell 0 and then writing that value to cell 1, overwriting whatever cell 1 was previously storing.
The only way to write to a memory without erasing information is to swap, which naturally is fully reversible. So a reversible circuit could swap the contents of the storage cells, but swap is fundamentally different from copy. Reversible circuits basically replace all copies/erasures with swaps, which dramatically blows up the circuit (they always have the same number of outputs as inputs, so simple gates like AND produce an extra garbage output which must be propagated indefinitely).
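To make the copy-vs-swap distinction concrete, here’s a minimal toy sketch in Python (the two-cell memory and function names are just illustrative, mirroring the example above):

```python
# Toy two-cell memory: copy is lossy, swap is reversible.

def copy_cells(mem):
    # Overwrite cell 1 with the value of cell 0; whatever cell 1 held is erased.
    return [mem[0], mem[0]]

def swap_cells(mem):
    # Exchange the two cells; no information is destroyed.
    return [mem[1], mem[0]]

print(copy_cells([1, 0]))              # [1, 1] -- the 0 that was in cell 1 is gone
print(copy_cells([1, 1]))              # [1, 1] -- two distinct inputs, one output: not invertible
print(swap_cells(swap_cells([1, 0])))  # [1, 0] -- swap undoes itself, so it is reversible
```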
An assembler which takes some mix of atoms/parts from the environment and then assembles them into some specific structure is writing information and thus also erasing information. The assembly process removes/erases entropy from the original configuration of the environment’s atoms/parts (itself a memory), which necessarily implies an increase of entropy somewhere else, so you can consider the Landauer limit an implication of the second law of thermodynamics. Every physical system is a memory, and physical transitions are computations. To avoid that erasure (i.e. to be reversible), the assembler would have to permanently store garbage bits equivalent to what it writes, which isn’t viable.
As a specific example, consider a physical system constrained to a simple lattice grid of atoms, each of which can be in one of two states and thus stores a single bit. An assembler which writes a specific bitmap (say an image of the Mona Lisa) to this memory must then store all the garbage bits previously in the memory, or erase them (which just moves them to the environment). Information/entropy is conserved.
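For a sense of scale, here is a rough sketch of the minimum heat cost of erasing those garbage bits under the Landauer limit; room temperature and the one-megabit bitmap size are my own assumptions, not numbers from the comment:

```python
import math

k_B = 1.380649e-23                 # Boltzmann constant, J/K
T = 300.0                          # assumed ambient temperature, K

landauer_per_bit = k_B * T * math.log(2)   # minimum heat per erased bit (~2.9e-21 J)
n_bits = 1_000_000                         # hypothetical bitmap size, in bits

print(f"per bit: {landauer_per_bit:.2e} J")
print(f"total:   {n_bits * landauer_per_bit:.2e} J")   # ~2.9e-15 J for 10^6 bits
```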
This is very helpful. I am definitely unfamiliar with the physics of computation and reversible computing, but your description was quite clear.
If I’m following you, “delete” in the case of mRNA assembly would mean that we have “erased” one rNTP from the solution, then “written” it into the growing mRNA molecule. The Landauer limit gives the theoretical minimum energy required for the “delete” part of this operation.
You are saying that since one high-energy phosphate bond (~1 ATP) is all that’s required to do not only the “delete” but also the “write,” and since the energy contained in this bond is pretty close to the Landauer limit, we can say there’s relatively little room to improve the energy efficiency of an individual read/write operation by using some alternative mechanism.
As such, mRNA assembly approaches not only Pareto optimality, but a true minimum of energy use for this particular operation. It may be that it’s possible to improve other aspects of the read/write operation, such as its reliability (mRNA transcription is error-prone) or speed. However, if the cell is Pareto optimal, then this would come at a tradeoff with some other trait, such as energy efficiency.
If I am interpreting you correctly so far, then I think there are several points to be made.
There may be a file drawer problem operating here. Is a paper finding that some biological mechanism is far from Pareto optimal or maximally thermodynamically efficient going to be published? I am not convinced about how confidently we can extrapolate beyond specific examples. This makes me quite hesitant to embrace the idea that individual computational operations, not to mention whole cell-scale architectures, are maximally energy efficient.
The energy of ATP hydrolysis is still almost 30x the Landauer limit, even ignoring the energy-consuming cellular context in which its energy can be used to do useful delete/copy operations. So there seems to be theoretical room to improve copy operations by 1 OOM even in a cellular context, not to mention gains by reorganizing the large-scale architecture.
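Here is the back-of-the-envelope behind that ~30x figure, assuming roughly 50 kJ/mol for ATP hydrolysis under cellular conditions and T ≈ 310 K (my numbers, so treat the exact ratio as approximate):

```python
import math

k_B = 1.380649e-23       # Boltzmann constant, J/K
N_A = 6.02214076e23      # Avogadro's number, 1/mol
T = 310.0                # approximate physiological temperature, K

atp_per_molecule = 50e3 / N_A            # assumed ~50 kJ/mol in vivo -> J per ATP hydrolysis
landauer_per_bit = k_B * T * math.log(2) # minimum heat per erased bit

print(f"ATP hydrolysis: {atp_per_molecule:.2e} J")   # ~8.3e-20 J
print(f"Landauer limit: {landauer_per_bit:.2e} J")   # ~3.0e-21 J
print(f"ratio: {atp_per_molecule / landauer_per_bit:.0f}x")  # ~28x
```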
Cells are certainly not Pareto optimal for achieving useful outcomes from the perspective of intelligent agents, such as a biosynthesis company or a malevolent AI. Even if I completely accepted your argument that wild-type cells are both Pareto optimal self-replicators and, for practical purposes, approaching the limit of energy efficiency in all their operations, this would have little bearing on the ability of agents to design cells/nanobots to accomplish specific practical tasks more efficiently than wild-type cells on any given metric of performance you care to name, by many OOMs.
In the vein of considering our appetite for disagreement: now that I understand the claims you are making more clearly, I think that, with the exception of the tractability of engineering grey goo, any differences of opinion between you and me are over levels of confidence. My guess is that there’s not much room to converge, because I don’t have the time to devote to this specific research topic.
All in all, though, I appreciate the effort you put into making these arguments, and I learned something valuable about the physics of computation. So thank you for that.
If anything I’d say the opposite is true: inefficiency in key biochemical processes that are under high selection pressure is surprising and more notable. For example, I encountered some papers the other day about the apparent inefficiency of a key photosynthesis enzyme.
I don’t know quite what you are referring to here, but I’m guessing you are confusing the reliable vs. unreliable limits, which I discussed in my brain efficiency post and linked somewhere else in this thread.
That paper Gunnar found (https://aip.scitation.org/doi/10.1063/1.4818538) analyzes replication efficiency in more depth:
More significantly, these calculations also establish that the E. coli bacterium produces an amount of heat less than six times (220 n_pep / 42 n_pep) as large as the absolute physical lower bound dictated by its growth rate, internal entropy production, and durability. In light of the fact that the bacterium is a complex sensor of its environment that can very effectively adapt itself to growth in a broad range of different environments, we should not be surprised that it is not perfectly optimized for any given one of them. Rather, it is remarkable that in a single environment, the organism can convert chemical energy into a new copy of itself so efficiently that if it were to produce even a quarter as much heat it would be pushing the limits of what is thermodynamically possible! This is especially the case since we deliberately underestimated the reverse reaction rate with our calculation of p_hyd, which does not account for the unlikelihood of spontaneously converting carbon dioxide back into oxygen. Thus, a more accurate estimate of the lower bound on β⟨Q⟩ in future may reveal E. coli to be an even more exceptionally well-adapted self-replicator than it currently seems.
I haven’t read the paper in detail enough to know whether that 6x accounts for reliability/errors or not.
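For reference, the arithmetic behind the quoted factor, in the quote’s own n_pep-normalized units:

```python
heat = 220   # E. coli heat output, in the n_pep-normalized units of the quoted passage
bound = 42   # thermodynamic lower bound, same units

print(heat / bound)  # ~5.24 -> "less than six times" the bound
print(heat / 4)      # 55.0 -> producing a quarter as much heat would approach the 42 bound
```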
or erase them (which just moves them to the environment)
I don’t follow this. In what sense is a bit getting moved to the environment?
I previously read Deconfusing Landauer’s Principle here and… well, I don’t remember it in any depth. But if I consider the model shown in figures 2–4, I get something like: “we can consider three possibilities for each bit of the grid. Either the potential barrier is up, and if we perform some measurement we’ll reliably get a result we interpret as 1. Or it’s up, and 0. Or the potential barrier is down (I’m not sure if this would be a stable state for it), and if we perform that measurement we could get either result.”
But then if we lower the barrier, tilt, and raise the barrier again, we’ve put a bit into the grid but it doesn’t seem to me that we’ve moved the previous bit into the environment.
I think the answer might be “we’ve moved a bit into the environment, in the sense that the entropy of the environment must have increased”? But that needs Landauer’s principle to see it, and I take the example as being “here’s an intuitive illustration of Landauer’s principle”, in which case it doesn’t seem to work for that. But perhaps I’m misunderstanding something?
(Aside, I said in the comments of the other thread something along the lines of, it seems clearer to me to think of Landauer’s principle as about the energy cost of setting bits than the energy cost of erasing them. Does that seem right to you?)
I think the answer might be “we’ve moved a bit into the environment, in the sense that the entropy of the environment must have increased”?
Yes, entropy/information is conserved, so you can’t truly erase bits. Erasure just moves them across the boundary separating the computer and the environment. This typically manifests as heat.
Landauer’s principle is actually about the minimum amount of energy required to represent or maintain a bit reliably in the presence of thermal noise. Erasure/copying then results in equivalent heat energy release.
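In symbols, the two quantities being distinguished are roughly the following (standard textbook forms; δ is a generic per-operation error probability I’m introducing for illustration, not a number from this thread):

```latex
% Minimum heat released when one bit is irreversibly erased at temperature T:
E_{\text{erase}} \;\ge\; k_B T \ln 2

% Rough energy scale needed to hold or switch a bit reliably against thermal
% noise with error probability \delta (Boltzmann-factor estimate):
E_{\text{bit}} \;\gtrsim\; k_B T \ln(1/\delta)
```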