If you want to securely erase a hard drive, it’s not as easy as writing it over with zeroes. Sure, an “erased” hard drive like this won’t boot up your computer if you just plug it in again. But if the drive falls into the hands of a specialist with a scanning tunneling microscope, they can tell the difference between “this was a 0, overwritten by a 0” and “this was a 1, overwritten by a 0”.
There are programs advertised to “securely erase” hard drives using many overwrites of 0s, 1s, and random data. But if you want to keep the secret on your hard drive secure against all possible future technologies that might ever be developed, then cover it with thermite and set it on fire. It’s the only way to be sure.
Pumping someone full of cryoprotectant and gradually lowering their temperature until they can be stored in liquid nitrogen is not a secure way to erase a person.
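For concreteness, the multi-pass overwrite such “secure erase” programs perform looks roughly like the sketch below. The file path, pass count, and block size are illustrative choices; a real tool would write to the raw block device and also flush the drive’s hardware caches.

```python
import os

def scrub(path, passes=3, block_size=1 << 20):
    """Overwrite a file in place with random data, several times.

    A sketch of what "secure erase" programs do; the pass count and
    block size here are arbitrary illustrative choices.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                chunk = min(block_size, remaining)
                f.write(os.urandom(chunk))
                remaining -= chunk
            f.flush()
            os.fsync(f.fileno())  # push this pass out to the device
```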
This argument amounts to “it might be possible” or “you can’t prove it’s not preserved”. This is true, but it’s not a reason to think “it is probably preserved”.
Eliezer’s hard drive comparison is actually wrong. As I commented on Timeless Identity, Peter Gutmann, who wrote the original list of steps to securely erase a disk, is particularly annoyed that it has taken on the status of a voodoo ritual. “For any modern PRML/EPRML drive, a few passes of random scrubbing is the best you can do. As the paper says, “A good scrubbing with random data will do about as well as can be expected”. This was true in 1996, and is still true now.”
This isn’t directly relevant to the question of memory in a brain—but it wasn’t then either, because it just isn’t a very apposite analogy to use in thinking about this question.
I’ve also commented (not in the original thread, can’t remember where) that the hard drive is a very much cherry picked analogy. Substitute it with DRAM and you get the opposite result: information-theoretic “death” within minutes of power loss at room temperature, a few weeks or months at most at liquid nitrogen temperature.
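The DRAM side of that comparison can be put into rough numbers with the common rule of thumb that cell leakage roughly halves (so retention roughly doubles) for every ~10 °C drop in temperature. The baseline figure and the doubling interval below are assumed ballpark values, not measurements:

```python
def dram_retention(t_celsius, base_seconds=0.064, base_t=85.0,
                   doubling_drop=10.0):
    """Rule-of-thumb DRAM cell retention time at a given temperature.

    base_seconds is an assumed retention at base_t (of the order of
    the standard refresh interval at the rated 85 C limit); retention
    is taken to double for every doubling_drop degrees of cooling.
    All three parameters are illustrative guesses.
    """
    return base_seconds * 2 ** ((base_t - t_celsius) / doubling_drop)

print(f"room temperature (25 C): ~{dram_retention(25):.0f} s")
print(f"liquid nitrogen (-196 C): ~{dram_retention(-196):.1e} s")
```

Even under these generous assumptions, the scale comes out as seconds at room temperature and months at liquid-nitrogen temperature, not years; that is the point of the contrast with hard drives.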
Of course the human brain is neither a DRAM nor a hard drive. Rather than arguing from analogies I think it’s better to listen to actual domain experts: neurobiologists and cryobiologists.
Yep. I put up this hypothetical before: Drop an iPhone into liquid nitrogen, slice it up very thin. Now recover the icons for the first three entries in the address book.
At least in this case we would expect it to be possible for someone with enough money and time, with today’s technology. You should be able to recover the contents of the hard drive.
The domain-expert (Gutmann) says otherwise. At this stage, it’d really take an example of data recovery in practice, not just in “you can’t prove I’m wrong!” hypothetical.
(I’m assuming you don’t have an example to hand of having recovered data yourself in this manner.)
I read Gutmann as talking about what you should expect: security for the real world. I don’t see where he talks about someone willing to put in an unrealistically huge amount of effort. But maybe I missed that? Could you point me that way?
It is true that I can’t philosophically prove that arbitrary hypothetical technology that would achieve something currently nigh-equivalent to magic cannot possibly exist, nor can I philosophically prove the data isn’t there any more, yes. I can say there is no evidence for either, and expertise and evidence against both, and that “but you can’t prove it isn’t true!” isn’t a very good argument.
“For any modern PRML/EPRML drive, a few passes of random scrubbing is the best you can do … A good scrubbing with random data will do about as well as can be expected”
But what does that mean? Can someone with an STM and lots of patience still get the data back? Or is it just “gone for our purposes, with today’s technology”?
What you need to realize is that for two states to be distinguishable even in principle, they must be separated by an energy barrier taller than thermal fluctuations. Otherwise thermal noise will randomly overwrite the state a zillion times a second.
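That barrier condition can be made quantitative with an Arrhenius-style estimate: a bistable state behind an energy barrier E flips at a rate proportional to exp(-E/kT). The attempt period below is an assumed molecular timescale, so the absolute numbers are only order-of-magnitude:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def retention_time(barrier_joules, temperature_k, attempt_period=1e-13):
    """Mean time before thermal noise flips a state behind an energy
    barrier, using the Arrhenius form t0 * exp(E / kT).

    attempt_period (t0) is an assumed molecular attempt timescale.
    """
    return attempt_period * math.exp(barrier_joules / (K_B * temperature_k))

# At 300 K a ~10 kT barrier flips within nanoseconds, while a
# ~60 kT barrier persists for hundreds of millennia.
room_t = 300.0
for n_kt in (10, 25, 60):
    t = retention_time(n_kt * K_B * room_t, room_t)
    print(f"{n_kt} kT barrier -> ~{t:.1e} s")
```

The exponential means there is a sharp divide: states a few kT apart are churned constantly, while states tens of kT apart are effectively permanent, with very little room in between for faint but stable residuals.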
The other issue is that the closer the states are, the less metabolic energy it takes to switch between them. That pushes something like neurons (evolved over a very long time) toward an optimum with no room for weak residuals recoverable by some future technology with more sensitive probes.
I.e. if the cryoprotectants happen to reset some bits, that information is gone. You have to hope that cryoprotectants do not actually reset anything, i.e. that nothing is forced from multiple states into one state.
edit: another issue. Individual ion channels, gap junctions, etc. combine more or less additively into the final electrical properties of the neuron. When you need to know the value of a sum a+b+c+d+e, losing even a single term introduces massive uncertainty in the result. It would have been a lot easier if those properties mirrored each other (a=b=c=d=e); then we’d only need to preserve one of them. But since they combine additively, we cannot afford to lose a single one.
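The additive-sum point can be illustrated with a toy simulation: five contributions drawn from some distribution, one of them lost, and the best the recoverer can do is substitute the prior mean. The distribution parameters are arbitrary:

```python
import random
import statistics

random.seed(42)

def trial(n_terms=5, mean=1.0, sd=0.5):
    """Lose one term of an additive sum and measure the damage."""
    terms = [random.gauss(mean, sd) for _ in range(n_terms)]
    true_sum = sum(terms)
    # Best guess for the lost term is its population mean.
    estimated = sum(terms[1:]) + mean
    return abs(estimated - true_sum)

errors = [trial() for _ in range(10_000)]
print("mean |error| after losing one of five terms:",
      round(statistics.mean(errors), 3))
# With redundant storage (a = b = c = d = e), any single surviving
# copy would reconstruct the sum exactly; additive combination has
# no such slack.
```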
As I note below, if you really want to hold on to this particular example for analogical purposes, it’s at a stage where “you can’t prove it’s false!” isn’t really adequate and you’d need to produce an example of recovering data in practice, not just hypothetically.
The data is potentially recoverable from a hard drive wiped with zeroes solely because the underlying medium is capable of storing more information: “0 overwritten by 0”, “0 overwritten by 1”, “1 overwritten by 0”, and “1 overwritten by 1” are, potentially, distinct states of the platter.
Synapses, on the other hand, store synaptic strengths at the molecular scale, where lost state information is completely gone: a molecule that can be in either state A or state B has no state for “B, but it used to be A” versus “B, and it used to be B”.
edit: bottom line is, the energy barrier between different values has to be considerably larger than kT for the data to persist at all. Synapses already store data at an energy level low enough that going much lower (for the analogue of residual magnetization on a drive, which is much less energetic than the original magnetization) puts you in the region where the data simply cannot be stored.
No, the argument is “it might be difficult to recover[1], but it is incredibly unlikely that it’s destroying enough information to make it actually impossible to recover”. Which is true, and which logically implies “it’s probably preserved”.
[1] Relative to an unclear standard, but presumably in terms of the computation needed to recover a brain state, as a multiple of the fastest process that could recover a brain state from a perfect instantaneous view (i.e. a copy of every particle’s precise state as the brain fell asleep).
Why do you think “it is incredibly unlikely that it’s destroying enough information to make it actually impossible to recover”? Where did you learn it? Did the “secure deletion” argument convince you of it or is it something you believed before?
The secure deletion argument convinced me, yes. It’s a compelling analogy, in that it points out how difficult it is to actually destroy information, even in a minimally-redundant medium when you’re specifically trying to destroy that information. A process in a brain, which is a highly-redundant medium, when specifically trying not to destroy data, is incredibly unlikely to make information unrecoverable.
Hard drives aren’t minimally redundant. The size of the magnetic regions on the platter is bounded from below by the requirement that the heads must be able to read and write them while passing over them at a very high speed. Furthermore, hard drives are a very stable medium: they are designed to reliably retain data for years or decades without power (possibly they may retain data even for centuries if the storage conditions are right).
I think it’s a bad analogy, and a cherry picked one. Contrast with how easy it is to delete data from a DRAM chip, and you’ll get why analogies with modern computer hardware don’t really make any sense for biological brains.
Except that the analogy is wrong: it’s quite easy to destroy the information. In practice, writing random data to a modern hard disk leaves it unrecoverable.
So:
Why did the idea that hard drives (a completely different form of data storage, something specifically designed to retain data across a wide range of conditions) are hard to erase make you think that it was hard to erase data from brains?
Now that you know that it isn’t hard to irrecoverably erase hard drives (even though they’re designed to retain data), how does that affect the analogy with brains? Why?
The analogy is not wrong. As you quote, it takes multiple passes deliberately trying to destroy the information to remove it.
Or an external degaussing magnetic field. Or heat. These methods make the hard drive unusable, but they reliably destroy information.
The weight of subject-expert opinion appears to be that it’s not recoverable unless and until you can show it is, in fact, recoverable. If you’re asserting otherwise, the first thing you’d need would be a counterexample.
I note also you’re supporting an expert opinion that agrees with you and denying an expert opinion that disagrees with you … when they’re linked opinions from the same expert.
No, I’m pointing out that the ‘expert opinion that disagrees with me’ doesn’t, in fact, disagree with me. The quote you yourself provided does not support your position.