No, the argument is “it might be difficult to recover[1], but it is incredibly unlikely that it’s destroying enough information to make it actually impossible to recover”. Which is true, and which logically implies “it’s probably preserved”.
[1] Relative to an unclear standard, but presumably measured as the computation needed to recover the brain state, expressed as a multiple of what the fastest possible process would need given a perfect instantaneous view (i.e. a copy of every particle's precise state as the brain fell asleep).
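To put that standard in symbols (notation introduced here for concreteness, not taken from the original comment), the difficulty could be read as a ratio

$$D = \frac{C_{\text{recover}}}{C_{\text{snapshot}}},$$

where $C_{\text{recover}}$ is the computation needed to recover the brain state from whatever actually remains, and $C_{\text{snapshot}}$ is what the fastest process would need given that perfect instantaneous view. "Difficult to recover" then means $D \gg 1$; "destroyed" would mean no finite amount of computation suffices at all.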
Why do you think “it is incredibly unlikely that it’s destroying enough information to make it actually impossible to recover”? Where did you learn it? Did the “secure deletion” argument convince you of it or is it something you believed before?
The secure deletion argument convinced me, yes. It's a compelling analogy, in that it points out how difficult it is to actually destroy information, even in a minimally redundant medium and even when you're specifically trying to destroy that information. A process in a brain, a highly redundant medium that is specifically trying not to destroy data, is incredibly unlikely to make information unrecoverable.
Hard drives aren’t minimally redundant. The size of the magnetic regions on the platter is bounded from below by the requirement that the heads must be able to read and write them while passing over them at a very high speed.
Furthermore, hard drives are a very stable medium: they are designed to reliably retain data for years or decades without power (possibly they may retain data even for centuries if the storage conditions are right).
I think it’s a bad analogy, and a cherry picked one. Contrast with how easy it is to delete data from a DRAM chip, and you’ll get why analogies with modern computer hardware don’t really make any sense for biological brains.
Except that the analogy is wrong: it’s quite easy to destroy the information. In practice, writing random data to a modern hard disk leaves it unrecoverable.
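For concreteness, a single-pass random overwrite is roughly the sketch below (the function name and file path are illustrative; a real wipe would target the raw block device rather than one file, and still would not reach remapped sectors or an SSD's spare area):

```python
import os

def overwrite_with_random(path: str, chunk_size: int = 1 << 20) -> None:
    """Overwrite an existing file in place with random bytes (single pass)."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        remaining = size
        while remaining > 0:
            n = min(chunk_size, remaining)
            f.write(os.urandom(n))      # cryptographically strong random data
            remaining -= n
        f.flush()
        os.fsync(f.fileno())            # force the overwrite out to the device

# overwrite_with_random("old_secrets.bin")  # hypothetical file name
```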
So:
Why did the idea that hard drives (a completely different form of data storage, something specifically designed to retain data across a wide range of conditions) are hard to erase make you think that it was hard to erase data from brains?
Now that you know that it isn’t hard to irrecoverably erase hard drives (even though they’re designed to retain data), how does that affect the analogy with brains? Why?
The analogy is not wrong. As you quote, it takes multiple passes of deliberately trying to destroy the information to remove it.
Or an external degaussing magnetic field. Or heat. These methods make the hard drive unusable, but they reliably destroy information.
The weight of subject-expert opinion appears to be that it’s not recoverable unless and until you can show it is, in fact, recoverable. If you’re asserting otherwise, the first thing you’d need would be a counterexample.
I note also you’re supporting an expert opinion that agrees with you and denying an expert opinion that disagrees with you … when they’re linked opinions from the same expert.
No, I’m pointing out that the ‘expert opinion that disagrees with me’ doesn’t, in fact, disagree with me. The quote you yourself provided does not support your position.