Has this been discussed before, and/or is there some reason that it doesn’t work or isn’t relevant?
I go over some of the issues here. One of the points as a taster:
However, in order to attain the supposed benefits of reversible computation, the reversible machine must actually be run backwards to attain its original state. If this step is not taken, then typically the machine becomes clogged up with digital heat i.e. entropy, and is thus rendered unable to perform further useful work.
I am talking about very long term applications, which (it seems?) you aren’t trying to address.
For example: yes, to run a reversible computer without waste heat you need to actually uncompute intermediate results. This introduces time overhead which is generally unacceptable for real applications in the modern world, where negentropy is abundant. But what does this have to do with the long term capability of the universe for computation?
Reversible computing is good news for power consumption and heat dissipation (including in the long term), but not great news, because of the reasons I go over in my article.
If you think that actually running things backwards is an attractive answer, perhaps think some more about how you are going to correct all errors and prevent an error catastrophe upon reversal, and about how you are going to propagate the “reverse now” signal in a reversible manner.
perhaps think some more about how you are going to correct all errors and prevent an error catastrophe upon reversal
The normal way. This generates waste heat, but at a rate which depends on the error rate of your components. Under our current understanding of physics, this can be driven essentially to zero in the long run. Even if it can’t, it can at least be driven down until we encounter some as-yet-unknown fundamental physical limitation. If we imagine people living in a reversible CA, or any other laws of physics which we can understand, then we can see how they could build an error-free computer once they had a theory of everything. Do you suspect our universe is more complicated, so that such an understanding is impossible? What do you think determines the quantitative bound on achievable error rate?
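To put a rough number on “a rate which depends on the error rate of your components”, here is a back-of-the-envelope sketch; the temperature, error rate and operation rate are made-up illustrative figures, and the only physics assumed is Landauer’s bound applied to the (at least) one bit of syndrome information erased per corrected error:

```python
# Back-of-the-envelope floor on error-correction heat (illustrative numbers).
import math

k_B = 1.380649e-23        # Boltzmann constant, J/K
T = 300.0                 # operating temperature in kelvin (assumed)
error_rate = 1e-15        # bit-errors per gate operation (assumed)
ops_per_second = 1e20     # gate operations per second (assumed)

# Assume correcting one bit-error erases at least one bit of syndrome
# information; Landauer's bound then gives a minimum dissipation per error.
errors_per_second = error_rate * ops_per_second
min_power = errors_per_second * k_B * T * math.log(2)   # watts

print(f"Landauer floor on waste heat: {min_power:.2e} W")  # ~2.9e-16 W
```

The floor scales linearly with the error rate, which is the sense in which better components drive the waste heat toward zero.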
and about how you are going to propagate the “reverse now” signal in a reversible manner.
I don’t understand this objection. I can write down reversible circuits which coordinate with only a constant factor penalty (it is trivial if I don’t care about the constant—just CNOT in a ‘reverse’ bit to each gate from a central controller, and then make each gate perform its operation in reverse if the bit is set, tripling the number of gates). What fundamental non-idealness of reality are you appealing to here, that would prevent a straightforward solution?
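Here is a toy sketch of that construction, just to show the bookkeeping goes through (hypothetical code, not a proposal for real hardware): each gate is an explicit bijection on bit-tuples, and a central ‘reverse’ flag makes every gate apply its inverse and the sequence run backwards, returning the machine to its initial state.

```python
# Toy reversible-circuit sketch: a central 'reverse' flag makes each gate
# apply its inverse permutation, so the machine retraces its own steps.

# The Toffoli gate as an explicit permutation of 3-bit tuples (self-inverse):
# (a, b, c) -> (a, b, c XOR (a AND b))
TOFFOLI = {
    (a, b, c): (a, b, c ^ (a & b))
    for a in (0, 1) for b in (0, 1) for c in (0, 1)
}

def apply_gate(perm, bits, reverse_bit):
    """Apply the permutation forward, or its inverse if the reverse bit is set."""
    if reverse_bit:
        inverse = {v: k for k, v in perm.items()}  # a bijection is always invertible
        return inverse[bits]
    return perm[bits]

def run(circuit, state, reverse=0):
    """Run a list of (gate, wires) pairs; with reverse=1, undo the whole circuit."""
    order = reversed(circuit) if reverse else circuit
    for perm, wires in order:
        out = apply_gate(perm, tuple(state[w] for w in wires), reverse)
        for w, v in zip(wires, out):
            state[w] = v
    return state

initial = [1, 1, 0, 0]
circuit = [(TOFFOLI, (0, 1, 2)), (TOFFOLI, (1, 2, 3))]
forward = run(circuit, list(initial))            # compute
restored = run(circuit, forward, reverse=1)      # "reverse now": uncompute
assert restored == initial                       # back to the starting state
```

The dictionary inversion stands in for the “tripling the number of gates”: each physical gate carries its forward and inverse versions plus the control that selects between them.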
perhaps think some more about how you are going to correct all errors and prevent an error catastrophe upon reversal
The normal way. This generates waste heat, but at a rate which depends on the error rate of your components. Under our current understanding of physics, this can be driven essentially to zero in the long run.
It seems to me that with hardware error correction, you have to pay for your error rate with heat. If you want to recover your machine by running it backwards you need a very low hardware error rate—which is correspondingly expensive in terms of heat dissipation.
If we imagine people living in a reversible CA, or any other laws of physics which we can understand, then we can see how they could build an error-free computer once they had a theory of everything.
I am not clear how having a TOE will help with cosmic rays and thermal noise. Of course you can deal with thermal noise using a fridge—but then your fridge needs a power supply...
and about how you are going to propagate the “reverse now” signal in a reversible manner.
I don’t understand this objection. I can write down reversible circuits which coordinate with only a constant factor penalty (it is trivial if I don’t care about the constant—just CNOT in a ‘reverse’ bit to each gate from a central controller, and then make each gate perform its operation in reverse if the bit is set, tripling the number of gates).
Propagating a “reverse” signal through a large system and getting the components to synchronously reverse course is not exactly trivial. It’s also hard to do reversibly, since the “reverse” signal itself tends to dissipate at the edges of the system, though as you say, that’s a one-off cost.
You proposed “tripling the number of gates”. Plus the machine has twice the runtime because of the “running backwards” business. Reversibility has some costs, it seems...
I am pretty sceptical about the idea that the future will see reversible computers that people will bother to run backwards. Instead, I expect that we will see more of the strategies mentioned in my article.
I think if you realistically want to run a reversible computer, you need to uncompute regardless, because otherwise your space is non-reusable, becoming only as useful as time.
This sounds right to me. Reversible computing means you can get closer to the energy limit set by Landauer’s principle, but you still don’t drive the negentropy cost per bit to zero.
You get the energy limit set by Landauer’s principle without reversible computing. Reversible computing completely circumvents Landauer’s principle (although there may be other limitations).
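For concreteness, the bound being circumvented is the usual Landauer statement: erasing one bit at temperature $T$ costs at least

$$E_{\min} = k_B T \ln 2 \approx 2.9 \times 10^{-21}\,\mathrm{J} \quad \text{at } T = 300\,\mathrm{K},$$

per logically irreversible operation; logically reversible operations are not subject to this floor.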
I don’t follow. You still have to pay for erasing and changing your bits, regardless of whether you use reversible computing and do the erasure at the end, or whether you do it during the computation as in irreversible computing.
You generally uncompute intermediate results in reversible computation, rather than erasing them: if you produced some garbage by starting from a low entropy state and running the computation C forward, you can get rid of the garbage by just running C backwards (perhaps first copying whatever output you care about, so that it doesn’t get destroyed).
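As a minimal sketch of that compute / copy / uncompute pattern (the four-bit “computation” C here is a made-up stand-in):

```python
# Compute / copy / uncompute: run C forward, copy out the answer with XOR,
# run C backward so the garbage disappears and the scratch space is reusable.

def C_forward(state):
    """Toy reversible computation on state = [x, y, scratch, out]."""
    x, y, scratch, out = state
    scratch ^= x          # intermediate garbage
    out ^= x ^ y          # the result we actually want
    return [x, y, scratch, out]

def C_backward(state):
    """Exact inverse of C_forward (undo the XORs in reverse order)."""
    x, y, scratch, out = state
    out ^= x ^ y
    scratch ^= x
    return [x, y, scratch, out]

def compute_copy_uncompute(x, y):
    state = C_forward([x, y, 0, 0])   # forward: answer plus garbage
    answer = 0 ^ state[3]             # CNOT-copy the output into a fresh zero bit
    state = C_backward(state)         # backward: garbage is uncomputed, not erased
    assert state == [x, y, 0, 0]      # the machine is back in its clean state
    return answer

print(compute_copy_uncompute(1, 0))   # -> 1
```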
perhaps first copying whatever output you care about, so that it doesn’t get destroyed
Well, yeah. You’re going to use up negentropy for that—where are you copying to? Reversible computing just means you spend less negentropy. (Feel like I’ve said this before.)
Yes, you just produce less entropy. But you produce a lot less entropy, and what remains is completely unrelated to Landauer’s principle.
Suppose I want to calculate a 1 TB document created by a googol-person civilization running for a googol googol years. I only have to produce a TB of entropy, rather than more than a googol googol googol bits (as I would have to if I used irreversible computing naively).
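Putting stand-in figures on the comparison (illustrative only; a googol cubed as a rough proxy for the total operation count):

```python
# Illustrative comparison of entropy produced: naive irreversible computing
# pays roughly one erased bit per operation; compute-and-uncompute pays only
# for the output you keep.
output_bits = 8e12                 # ~1 TB document kept at the end
total_ops = 1e300                  # "googol googol googol" operations (stand-in)

irreversible_cost_bits = total_ops     # ~1 bit erased per operation
reversible_cost_bits = output_bits     # only the copied-out answer remains

print(f"saving factor: {irreversible_cost_bits / reversible_cost_bits:.1e}")  # roughly 1e287
```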
Very nice! So not only can you cheaply compute-and-uncompute a lot of independent worlds, you can also allow them to leave an arbitrarily-difficult-to-produce trace on future worlds. Given how much entropy we really have, sufficiently small persons, for example, can be spared from uncomputation.
In particular, a person can live in an incrementally computed-and-uncomputed virtual world that is being regularly reversed to its initial state, with the effect that only the person consumes entropy, and the whole arbitrarily complicated world has zero entropic footprint. The world could also be optimized game-save-style over person-time, starting from an initial state, but going forward, so that some version of the person does all the updating, and so carries the excess entropy. Alternatively, improvements to the world could be carried out by discarded copies. Think of software downloaded from the distant future…
Or more generally, this is just time travel, where you can transport sufficiently small things (and people) to the past (or between timelines) and change things the next time over. You travel forwards in time by computing, backwards by uncomputing, you can take some luggage with you, you can climb up a different timeline by observing without interference, and you can intervene and change a timeline any time you want. The network of people navigating the baseline network of virtual worlds builds up a second level (implementing meta-time), which itself can be navigated and intervened-in by other observers, and so forth. Sounds like a Greg Egan novel (that hasn’t been written yet).