Hi, not to spend a lot of time here, but someone called my attention to the fact that I was mentioned in the comments. Just a few things:
In the comment of mine where I was quoted, I was talking about conventional architectures wherein one erases bits by simply discarding the signal energy, and where the signals themselves have enough associated energy (e.g. an energy difference between 0 and 1 states, or an energy barrier between the states) to be reliably distinguished despite thermal noise. Yes, theoretically one can do somewhat better than this (i.e., closer to the kT ln 2 minimum) with more complicated erasure protocols, but these do generally come at a cost in terms of time. Also, most treatments of these protocols ignore the energy requirements for operating the control mechanisms, which depending on their nature can themselves be substantial. For realistic engineering purposes, one should really analyze a very concrete exemplar mechanism in much more detail; otherwise there are always valid questions that can be raised about whether the analysis that one has done is actually realistic.
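For a sense of scale, here is a minimal Python sketch of that kT ln 2 minimum (the function name is mine, and 300 K is just an illustrative room-temperature choice):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def landauer_limit(temperature_kelvin: float) -> float:
    """Minimum heat dissipated per bit erased: E = kT ln 2."""
    return k_B * temperature_kelvin * math.log(2)

# At 300 K this is about 2.87e-21 J (~0.018 eV) per bit erased.
print(f"{landauer_limit(300.0):.3e} J/bit")
```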
That being said, regarding the details, analysis, and optimization of alternative erasure protocols, there is a lot of existing published and preprint literature on this topic, some of it quite recent, so I would encourage anyone interested to begin by surveying what’s already been done before spending a ton of time reinventing the wheel. Start with a few relevant keyword searches on Google Scholar, then follow citations forward and backward, etc. Nowadays you can use AIs to help you quickly absorb the gist of papers you find, so “doing your homework” in terms of background research is easier than ever.
All this aside, from my POV, optimizing bit erasure is less interesting than reversible computing, since RC can in theory do even better by avoiding erasure, or at least greatly reducing the number of bit erasures needed. Of course, reversible computing has its own overheads, and my comments above about needing to analyze concrete mechanisms in detail for the analysis to be relevant for engineering purposes apply to it as well. Lots of work still needs to be done to prove any of these ideas truly practical. I’d certainly encourage anyone who’s interested to get involved, since we’re never going to see these things happen until a lot more people seriously start working on them. And sadly, AI is still a long way from being able to do serious engineering innovation all on its own — but I think humans who understand how to engage AI effectively on challenging problems could make great strides.
Cheers… ~Mike Frank
Hi Dr. Frank, thanks for weighing in.
This post is definitely towards the theoretical end of the theory-engineering spectrum, so the question it tries to answer is much more “does the fact that the laws of physics are reversible rule this out?” than “can we actually build something that does this?”. It sounds to me like lowish-cost reliable erasure is not ruled out in principle by reversible physics (e.g., this paper gives a bound of the form kT ln 2 + c/t, where t is the time allotted to the erasure and c is some constant), and so the engineering questions are what will be decisive. Hopefully reversible computing will be luckier than nuclear fusion power in terms of the engineering obstacles it runs into.
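To make that time/energy tradeoff concrete, here is a small Python sketch of a bound of the kT ln 2 + c/t form (the value of c below is purely hypothetical; in practice it depends on the protocol and mechanism):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # illustrative room temperature, K
c = 1e-21            # hypothetical protocol-dependent constant, J*s

def erasure_cost(t: float) -> float:
    """Finite-time erasure bound of the form kT ln 2 + c/t (t in seconds)."""
    return k_B * T * math.log(2) + c / t

# The excess over the kT ln 2 floor shrinks as the erasure is slowed down.
for t in (1e-9, 1e-6, 1e-3):
    print(f"t = {t:.0e} s -> cost = {erasure_cost(t):.2e} J "
          f"(excess = {c / t:.2e} J)")
```

The point is just that the excess term vanishes in the slow-erasure limit, so reliable erasure near the kT ln 2 floor is slow rather than impossible in principle.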
I don’t have too much else to add right now; I’ll have to take your advice and look through the literature to see what’s out there.