No they’re not, they’re describing functional damage and saying why it would be hard to repair in situ, not talking about what you can and can’t information-theoretically infer about the original brain from the post-vitrification position of molecules. In other words, the argument does not have the form of, “These two cognitively distinct states will map to molecularly indistinguishable end states”. I’m not saying you have to use that exact phrasing but it’s what the correct version of the argument is necessarily about, since (modus tollens) anything which defeats that conclusion in real life causes cryonics to work in real life.
Are you referring to the neuroscientist’s discussion linked in the OP? This comment seems quite clear regarding the information-theoretic consequences:
Distortion of the membranes and replacement of solvent irretrievably destroys information that I believe to be essential to the structure of the mind. (...) (information simply isn’t there to be read, regardless of how advanced the reader may be).
In our lingo: the state transformation is a non-injective function (=loss of information).
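The non-injectivity point can be put in code. This is a toy sketch only: the states and the rounding “vitrification” map are invented for illustration, not a claim about real neurobiology.

```python
# Toy illustration of a non-injective "vitrification" map: two distinct
# originals land on the same end state, so the original is unrecoverable.
# All numbers and the map itself are invented for illustration.

def vitrify(state):
    """Collapse fine detail: round each synapse 'strength' to the nearest 0.5."""
    return {k: round(v * 2) / 2 for k, v in state.items()}

brain_a = {"synapse_1": 0.61, "synapse_2": 0.24}
brain_b = {"synapse_1": 0.58, "synapse_2": 0.21}  # a cognitively distinct original

# The two distinct originals are indistinguishable afterwards: no reader,
# however advanced, can tell which one you started from.
print(vitrify(brain_a) == vitrify(brain_b))  # → True
```

If the map were injective on cognitively distinct states, the comparison would come out False for every such pair, which is exactly the property the whole argument turns on.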
However, the import of the distance between a “best guess” facsimile and the original is hard to evaluate. Would it be on the order of the difference between before and after a night’s sleep? Before and after a TBI injury (yay pleonasm)?
Indistinguishable from your current self in a hypothetical Turing test variant, with you squaring off against such a carbon copy?
Speculatively, I’d rather think all that damage doesn’t play that big a role. Disrupted membranes should still yield the locations of the synapses with high spatial fidelity, and given how constantly we interfere with neurotransmitters, the exact concentration in each synapse does not seem identity-constituting.
Otherwise, we’d incur information-theoretic death of our previous selves each time we took, e.g., a neurotransmitter-manipulating drug such as an SSRI. Which we do in a way, just not in a relevant way.
I thought you meant “neoplasm”, then I actually Googled pleonasm and there’s a good chance you mean that. Which is it???
Heh, pleonasm, since the “I” in the TBI acronym already refers to “injury”, thus rendering the second “injury” overkill. Let’s get side-tracked on that, typical LW style :)
Pleonasm, neoplasm … potato, topota.
kalla724 quotes from the thread:
Reading the structure and “downloading it” is impossible, since many aspects of synaptic strength and connectivity are irretrievably lost as soon as the synaptic membrane gets distorted.
Distortion of the membranes and replacement of solvent irretrievably destroys information that I believe to be essential to the structure of the mind.
I don’t think any intelligence can read information that is no longer there. So, no, I don’t think it will help.
The damage that is occurring—distortion of membranes, denaturation of proteins (very likely), disruption of signalling pathways. Just changing the exact localization of Ca microdomains within a synapse can wreak havoc, replacing the liquid completely? Not going to work.
Replacing the solvent, however, would do it almost unavoidably (adding the cryoprotectant might not, but removing it during rehydration will). With membrane-bound proteins you also have the issue of asymmetry. Proteins will seem fine in a symmetric membrane, but more and more data shows that many don’t really work properly; there is a reason why cells keep phosphatydilserine and PIPs predominantly on the inner leaflet.
These appear to be saying just what I thought they were saying—current cryonics practice destroys the information—and, given the above, I don’t see sufficient evidence to assume your reading.
At best you can get the impression that kalla is in principle aware of the information-theoretic criterion of death but is in practice just conflating it with functional damage and with knowledge of how hard it would be to repair in situ. What I observe is a domain expert (predictably, and typically) overestimating the relevance of their expertise to a situation outside what they are actually trained and proficient in. Most salient points:
Reading the structure and “downloading it” is impossible, since many aspects of synaptic strength and connectivity are irretrievably lost as soon as the synaptic membrane gets distorted.
Irretrievably? I’d be surprised if that word means what he thinks it means. In particular, for him to have a correct understanding of the term would require abandoning notions of what his field currently considers possible and doing advanced study in probability theory and physics. (To be credible in this claim he’d essentially have to demonstrate that he isn’t thinking like a professional neuroscientist for the purpose of the claim.)
The damage that is occurring—distortion of membranes, denaturation of proteins (very likely), disruption of signalling pathways.
(Those sound like a big deal to a neuroscientist in current practice. Whether they are beyond the theoretical capabilities of a superintelligence to recover? I would bet that the comment author really has no good reason to credibly doubt.)
adding the cryoprotectant might not, but removing it during rehydration will
Rehydration? Removing the cryoprotectant? Assume much? (This itself would be enough to conclude that Kalla is giving a Credible, Professional and Authoritative opinion that cannot be questioned… on an entirely different question to the one that actually matters for reasoning about cryonics-with-expected-superintelligence.)
Proteins will seem fine in a symmetric membrane, but more and more data shows that many don’t really work properly
Don’t really work properly, huh? (Someone is missing the point again.)
What wedrifid said. Everything the guy says is about functional damage. Talking about the impossibility of repairing proteins in place even more says that this is somebody thinking about functional damage. Throwing in talk about “information destruction” but not saying anything about many-to-one mappings just tells me that this is somebody who confuses retrievable function with distinguishable states. The person very clearly did not get what the point was, and, this being the case, I see no reason to try to read his judgments as being judgments about the point.
I’d like to be absolutely clear on the claim that’s being made here.
If I overstate the claim, understate the claim or even state it in a manner that seems unduly silly, please do correct me—my aim here is to ascertain precisely what the claim being made is.
As I understand it, you are claiming that:
current cryonics practice will preserve sufficient information that a future superintelligence (that we do not presently understand enough about to construct or predict the actions of) may, using unspecified future technologies, be able to use the information in the brain preserved using current cryonics practice to reconstruct the personality that was in said brain at the time of its preservation to a sufficient fidelity that it would count to the personality signing up for such preservation as revival;
that having no idea what technologies the superintelligence might use to perform this (presently apparently physically impossible) task and having almost no idea about almost any characteristic of this future superintelligence, beyond a list of things we know we don’t want it to do, doesn’t count as an objection of substance;
and that kalla724 being unable to conclusively disprove this is enough reason to dismiss kalla724’s objections in toto.
Have I left anything out, overstated or understated anything here?
If the above is wildly off base, could you please summarise the actual claim in your own words?
Wildly off base. The key steps are whether on a molecular level, no more than one original person has been mapped to one frozen brain; if this is true, we can expect sufficiently advanced technology generally, and systems described in Drexler’s highly specific Nanosystems book particularly, to be sufficient albeit not necessary (brain scanning might work too). There’s also a lot of clueless objections along lines of “But they won’t just spring back to life when you warm them up” which don’t bear on the key question one way or another. Real debate on this subject is from people who understand the concept of information loss, offering neurological scenarios in which information loss might occur; and real cryonicists try to develop still-better suspension technology in order to avert the remaining probability mass of such scenarios.

However, for information loss to actually occur, given current vitrification technology which is actually pretty darned advanced, would require that we have learned a new fact presently unknown to neuroscience; and so scenarios in which present cryonics technology fails are speculative. It’s not a question of “fail to disprove”, it’s a question of what happens if you just extrapolate current knowledge at face value without worrying about whether the conclusion sounds weird.

Similarly, you can postulate a social collapse which wipes out the infrastructure for liquid nitrogen production, and a cryonics facility could try to further defend against that scenario by having on-premises cooling powered by solar cells… but if you were actually told the US would collapse in 2028, you would have learned a new fact you did not presently know; it’s not a default assumption.
There’s also a lot of clueless objections along lines of “But they won’t just spring back to life when you warm them up” which don’t bear on the key question one way or another.
This is, of course, not anywhere in anything that kalla724 or I said.
However, for information loss to actually occur, given current vitrification technology which is actually pretty darned advanced, would require that we have learned a new fact presently unknown to neuroscience; and so scenarios in which present cryonics technology fails are speculative.
Thank you, this is a solid claim that current cryonics practice preserves sufficient information (even if we presently have literally no idea how to get it out).
If you complain about how it would be hard to in-situ repair denatured proteins—instead of talking about how two dissimilar starting synapses would be mapped to the same post-vitrification synapse because after denaturing it’s physically impossible to tell if the starting protein was in conformation X or conformation Y—then you’re complaining about the difficulty of repairing functional damage, i.e., the brain won’t work after you switch it back on, which is completely missing the point.
If neuroscience says conformation X vs. conformation Y makes a large difference to long-term spiking input/output, which current neuroscience holds to be the primary bearer of long-term brain information, and you can show that denaturing maps X and Y to identical end proteins, then the ball has legitimately been hit back into the court of cryonics. It’s entirely possible that the same information redundantly appears elsewhere and the brain as a whole still uniquely identifies a single person and their personality and memories, but in that case being told that cryonics worked would tell us a new fact of neuroscience we didn’t previously know (e.g. that the long-term behavior of this synapse was reflected in a distinguishable effect on the chemical balance of nearby glial cells or something). As things currently stand, though, if we find out that cryonics doesn’t work, we must have learned some new fact of neuroscience: that informationally important brain state is not visible in vesicle densities, synaptic configurations, and other things that current neuroscience says are important and that we can see preserved in vitrified rat brains.
We don’t have current tech for getting the info out, but there are solid foreseeable routes in both nanoimaging and nanodevices. If the molecules are in place with sufficient resolution, sufficiently advanced and foreseeable future imaging tech or nanomanipulation tech should be able to get the info out. Nanosystems-level technology would definitely be sufficient though not necessary, and those are some fairly detailed calculations, estimates, and toy systems being bandied about.
The key steps are whether on a molecular level, no more than one original person has been mapped to one frozen brain
Maybe I’m missing something, but even with cremation, on a molecular level probably no more than one person gets mapped to one specific pile of ash, because it would be a huge coincidence if cremating two different bodies ended up creating two identical piles of ash.
You’re missing something. Any one person gets mapped to a very wide spread of possible piles of ash. These spreads overlap a lot between different people. Any one pile of ash could potentially have been generated by an exponentially vast space of persons.
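The overlapping-spreads point can be sketched as follows. This is a toy model: the persons, the seeds, and the “cremation” process are all invented for illustration.

```python
# Toy model of "one person maps to a wide spread of possible ash piles,
# and the spreads overlap". Everything here is invented for illustration.
import random

def cremate(person_id, circumstance_seed):
    """Cremation as a chaotic process: the resulting 'pile of ash' depends
    overwhelmingly on circumstances, hardly at all on the person."""
    rng = random.Random(f"{person_id}:{circumstance_seed}")
    return tuple(rng.randint(0, 9) for _ in range(4))

# Each person maps to a wide spread of possible piles under varying circumstances...
piles_alice = {cremate("alice", s) for s in range(5000)}
piles_bob = {cremate("bob", s) for s in range(5000)}

# ...and the spreads overlap heavily, so a given pile could have come from
# either person: the pile does not pin down its preimage.
print(len(piles_alice & piles_bob) > 0)  # → True
```

The contrast with vitrification is that the overlap is the problem, not the spread itself: a process whose possible outputs for different people never overlapped would preserve identity even if each person could map to many outputs.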
As I understand it, a pretty important element in the cryonics argument is that you stick to things that are feasible given our current understanding of physics, though not necessarily given our current level of technology. Conflating technology and physics here will turn the arguments into hash, so it’s kinda important to keep them separate. It’s generally assumed that the future superintelligences will obey laws of physics that are pretty much what we understand them to be now, although they may apply them to invent technologies we have no idea about. “Things will have to continue working with the same laws of physics they’re working with now” seems different to me from “any random magical stuff can happen because Singularity”, which you seem to be going for here.
I’m not sure if “just don’t break the laws of physics” is strong enough though. Few people think it very feasible that there would be any way to reconstruct a human body locked in a box and burnt to ash, but go abstract enough with the physics and it’s all just a bunch of particles running on neat and reversible trajectories, and maybe some sort of Laplace’s demon contraption could track enough of them and trace them back far enough to get the human persona information back. (Or does this run into Heisenberg uncertainty?)
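The “neat and reversible trajectories” idea can be sketched with a toy invertible update rule (invented dynamics, standing in for reversible microphysics): because each step is a bijection on the state space, an observer who knows the exact final state can in principle run the dynamics backwards.

```python
# Toy reversible dynamics: each step is a bijection on states, so exact
# knowledge of the final state lets you run time backwards. The update
# rule is invented for illustration.

M = 2**32  # size of the toy state space

def step(x):
    """One forward time step: an invertible affine map on integers mod M."""
    return (x * 3 + 7) % M  # 3 is coprime to 2**32, so this is a bijection

def step_back(x):
    """The exact inverse of step()."""
    inv3 = pow(3, -1, M)  # modular inverse of 3 mod 2**32 (Python 3.8+)
    return ((x - 7) * inv3) % M

state = 123456789  # "the person", encoded as a state
for _ in range(1000):
    state = step(state)  # "burn the box": scramble beyond recognition

for _ in range(1000):
    state = step_back(state)  # a Laplace's demon traces the trajectory back

print(state)  # → 123456789: the initial state is recoverable in principle
```

The catch is that the demon needs the exact final microstate; thermalization of the ash into the environment, and the Heisenberg worry raised above, are precisely what deny any real observer that knowledge.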
The “possible physically but not technologically” seems like a rather tricky type of reasoning. Imagine trying to explain that you should be able to build a nuclear reactor or a moon rocket to someone who has never heard of physics, in 1920 when you don’t have the tech to do either yet. But it seems like the key to this argument, and I rarely see people engaging with it. The counterarguments seem to be mostly about either the technology not being there or philosophical arguments about the continuity of the self.
Imagine trying to explain that you should be able to build a nuclear reactor or a moon rocket to someone who has never heard of physics, in 1920 when you don’t have the tech to do either yet.
H. G. Wells did it: http://en.wikipedia.org/wiki/The_War_in_the_Air http://en.wikipedia.org/wiki/First_Men_In_The_Moon
Also, people can sometimes do it themselves:
http://www.smithsonianmag.com/history-archaeology/For-40-Years-This-Russian-Family-Was-Cut-Off-From-Human-Contact-Unaware-of-World-War-II-188843001.html
Relevant quote:
“As the Soviet geologists got to know the Lykov family, they realized that they had underestimated their abilities and intelligence. Each family member had a distinct personality; Old Karp was usually delighted by the latest innovations that the scientists brought up from their camp, and though he steadfastly refused to believe that man had set foot on the moon, he adapted swiftly to the idea of satellites. The Lykovs had noticed them as early as the 1950s, when “the stars began to go quickly across the sky,” and Karp himself conceived a theory to explain this: “People have thought something up and are sending out fires that are very like stars.””
Note that what I posit as the apparent argument makes no contentions about continuity of self—let’s assume minds can in fact be copied around like MP3s.
Yes, I’m annoyed when people pull out a hypothetical magic-equivalent superintelligence that will make everything all better as an argument so solid that the burden of proof is to disprove it: “we don’t know what such a being could do (or, indeed, anything else about it), therefore you must prove that such a hypothetical being could not do (whatever magic-equivalent is needed at that point).” They don’t know how to get there from here, but they’re trying really hard, therefore this hypothetical being should be assumed?
“we don’t know what such a being could do (or, indeed, anything else about it), therefore you must prove that such a hypothetical being could not do (whatever magic-equivalent is needed at that point).”
I just said we’re assuming we know it can’t break the laws of physics.
We can tell that if you blow up someone with antimatter, putting them back together would have to involve breaking the speed of light unless you start out controlling the entire surrounding light cone before the person was blown up. If the person was vitrified, there isn’t a similar obvious violation of laws of physics involved in putting them back together.
So it seems like cryonics after death gives you a better chance at being eventually reanimated than antimatter burial after death, with regular burial leaning heavily towards the antimatter option: the causal stuff that needs to be traced back to put you together gets spread too wide. Yet people still argue as if cryonics should be treated just the same as regular burial as long as there’s no demonstrable technology that shows it working for humans.
I’m not sure why it’s a dealbreaker to assume that the technology side will advance into something we can’t fully anticipate. Today’s technology is probably extremely weird from the viewpoint of someone from 1900, but barring the quantum mechanical bits, it’s still based on the laws of physics a physicist from 1900 would be quite familiar with.
Today’s technology is probably extremely weird from the viewpoint of someone from 1900, but barring the quantum mechanical bits, it’s still based on the laws of physics a physicist from 1900 would be quite familiar with.
GPS depends on relativity. And “barring the quantum mechanical bits” is a hell of an overwhelming exception. (But make that “a physicist from 1930” and I will agree.)
Heavy functional damage still rules out some possible revival methods, so reduces probability of success.
“Warm ’em up and see if they spring back to life” was a possible revival method that cryonicists already didn’t believe in, so pointing out its impossibility should not affect probability estimates relative to what cryonicists have already taken into account.