To me this just looks like a bias-manipulating “unpacking” trick—as you divide larger categories into smaller and smaller subcategories, the probability that people assign to the total category goes up and up. I could equally make cryonics success sound almost certain by lumping all the failure categories together into one or two big things to be probability-assigned, and unpacking all the disjunctive paths to success into finer and finer subcategories. Which I don’t do, because I don’t lie.
Also, yon neuroscientist does not understand the information-theoretic criterion of death.
There’s another effect of “unpacking”, which is that it gets us around the conjunction/planning fallacy. Minimally, I would think that unpacking both the paths to failure and the paths to success is better than unpacking neither.
I wonder if that would actually work, or if the finer granularity basically just trashes the ability of your brain to estimate probabilities.
I think it’s also good to mention that this kind of questionnaire does not account for possible future advancements, which are not included because they aren’t available yet. The same, though, applies to further negative changes in the future; but looking at that list, for example, items like the following are completely missing:
Legislation for improving the safety and conditions of cryopreserved people is passed
Neuroscientists develop new general techniques for restoring function in patients with brain damage
Breakthrough in nanotechnology allows better analysis and faster repair of damaged neurons
Supercomputers can be used to retrace the original condition of a modified or damaged brain
Supercomputers (with the help of FAI?) can be used to reconstruct missing data from redundancy (as mentioned above in Benja’s comment)
etc.
That is to say, it’s one thing to ‘unpack’ a proposition and another to do it accurately; at the least, I would think a questionnaire with both uncertain positive and negative future events would seem less biased.
I think it’s also worthwhile to consider the possibility that this unpacking business is a sort of inverse of the conjunction fallacy. It’s not exactly the same thing, but I think it’s a very closely related topic.
They appear to, they are questioning whether current cryonic practice preserves said information at all—they are saying it will destroy it.
No they’re not, they’re describing functional damage and saying why it would be hard to repair in situ, not talking about what you can and can’t information-theoretically infer about the original brain from the post-vitrification position of molecules. In other words, the argument does not have the form of, “These two cognitively distinct states will map to molecularly indistinguishable end states”. I’m not saying you have to use that exact phrasing but it’s what the correct version of the argument is necessarily about, since (modus tollens) anything which defeats that conclusion in real life causes cryonics to work in real life.
Are you referring to the neuroscientist’s discussion linked in the OP? This comment seems quite clear regarding the information-theoretic consequences:
Distortion of the membranes and replacement of solvent irretrievably destroys information that I believe to be essential to the structure of the mind. (...) (information simply isn’t there to be read, regardless of how advanced the reader may be).
In our lingo: the state transformation is a non-injective function (=loss of information).
However, the import of the distance between a “best guess” facsimile and the original is hard to evaluate. Would it be on the order of the difference between before and after a night’s sleep? Before and after a TBI injury (yay pleonasm)?
Undifferentiable from your current self in a hypothetical Turing test variant, with you squaring off against such a carbon copy?
Speculatively, I’d rather think that all that damage doesn’t play that big of a role. Disrupted membranes should still yield the location of the synapses with high spatial fidelity, and given the way we interfere with neurotransmitters constantly, the exact concentration in each synapse does not seem identity-constituting.
Otherwise, we’d incur information-theoretic death of our previous selves each time we take e.g. a neurotransmitter manipulating drug such as an SSRI. Which we do in a way, just not in a relevant way.
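To make the “non-injective function” point above precise, here is a minimal formalization (my own sketch, not taken from the thread):

```latex
Let $S$ be the set of cognitively distinct brain states and let
$f : S \to P$ map a living state to its vitrified state.

If $f$ is injective,
\[ f(s_1) = f(s_2) \implies s_1 = s_2 , \]
then the original state is recoverable in principle, however hard inverting
$f$ might be in practice. Information-theoretic death is the many-to-one case:
\[ \exists\, s_1 \neq s_2 \in S \quad \text{with} \quad f(s_1) = f(s_2) , \]
where no reader, however advanced, can tell from the preserved state which
original produced it.
```

On this framing, the disagreement in the thread is about whether current vitrification actually sends distinct states to the same image, not about how difficult inverting the map would be.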
I thought you meant “neoplasm”, then I actually Googled pleonasm and there’s a good chance you mean that. Which is it???
Heh, pleonasm, since the “I” in the TBI acronym already refers to “injury”, thus rendering the second “injury” overkill. Let’s get side-tracked on that, typical LW style :)
Pleonasm, neoplasm … potato, topota.
kalla724 quotes from the thread:
Reading the structure and “downloading it” is impossible, since many aspects of synaptic strength and connectivity are irretrievably lost as soon as the synaptic membrane gets distorted.
Distortion of the membranes and replacement of solvent irretrievably destroys information that I believe to be essential to the structure of the mind.
I don’t think any intelligence can read information that is no longer there. So, no, I don’t think it will help.
The damage that is occurring—distortion of membranes, denaturation of proteins (very likely), disruption of signalling pathways. Just changing the exact localization of Ca microdomains within a synapse can wreak havoc, replacing the liquid completely? Not going to work.
Replacing the solvent, however, would do it almost unavoidably (adding the cryoprotectant might not, but removing it during rehydration will). With membrane-bound proteins you also have the issue of asymmetry. Proteins will seem fine in a symmetric membrane, but more and more data shows that many don’t really work properly; there is a reason why cells keep phosphatidylserine and PIPs predominantly on the inner leaflet.
These appear to be saying just what I thought they were saying—current cryonics practice destroys the information - and, given the above, I don’t see sufficient evidence to assume your reading.
These appear to be saying just what I thought they were saying—current cryonics practice destroys the information—and, given the above, I don’t see sufficient evidence to assume your reading.
At best you can get the impression that kalla is in principle aware of the information-theoretic criterion of death but in practice just conflating it with functional damage and knowledge of how hard it would be to repair in situ. What I observe is a domain expert (predictably, and typically) overestimating the relevance of their expertise to a situation outside what they are actually trained and proficient in. Most salient points:
Reading the structure and “downloading it” is impossible, since many aspects of synaptic strength and connectivity are irretrievably lost as soon as the synaptic membrane gets distorted.
Irretrievably? I’d be surprised if that word means what he thinks it means. In particular, for him to have a correct understanding of the term would require abandoning notions of what his field currently considers possible and doing advanced study in probability theory and physics. (To be credible in this claim he’d essentially have to demonstrate that he isn’t thinking like a professional neuroscientist for the purpose of the claim.)
The damage that is occurring—distortion of membranes, denaturation of proteins (very likely), disruption of signalling pathways.
(Those sound like a big deal to a neuroscientist in current practice. Whether they are beyond the theoretical capabilities of a superintelligence to recover? I would bet that the comment author really has no good reason to credibly doubt.)
adding the cryoprotectant might not, but removing it during rehydration will
Rehydration? Removing the cryoprotectant? Assume much? (This itself would be enough to conclude that Kalla is giving a Credible, Professional and Authoritative opinion that cannot be questioned… on an entirely different question to the one that actually matters for reasoning about cryonics-with-expected-superintelligence.)
Proteins will seem fine in a symmetric membrane, but more and more data shows that many don’t really work properly
Don’t really work properly, huh? (Someone is missing the point again.)
What wedrifid said. Everything the guy says is about functional damage. Talking about the impossibility of repairing proteins in-place even more says that this is somebody thinking about functional damage. Throwing in talk about “information destruction” but not saying anything about many-to-one mappings just tells me that this is somebody who confuses retrievable function with distinguishable states. The person very clearly did not get what the point was, and this being the case, I see no reason to try and read his judgments as being judgments about the point.
I’d like to be absolutely clear on the claim that’s being made here.
If I overstate the claim, understate the claim or even state it in a manner that seems unduly silly, please do correct me—my aim here is to ascertain precisely what the claim being made is.
As I understand it, you are claiming that:
current cryonics practice will preserve sufficient information that a future superintelligence (that we do not presently understand enough about to construct or predict the actions of) may, using unspecified future technologies, be able to use the information in the brain preserved using current cryonics practice to reconstruct the personality that was in said brain at the time of its preservation to a sufficient fidelity that it would count to the personality signing up for such preservation as revival;
that having no idea what technologies the superintelligence might use to perform this (presently apparently physically impossible) task and having almost no idea about almost any characteristic of this future superintelligence, beyond a list of things we know we don’t want it to do, doesn’t count as an objection of substance;
and that kalla724 being unable to conclusively disprove this is enough reason to dismiss kalla724′s objections in toto.
Have I left anything out, overstated or understated anything here?
If the above is wildly off base, could you please summarise the actual claim in your own words?
Wildly off base. The key steps are whether on a molecular level, no more than one original person has been mapped to one frozen brain; if this is true, we can expect sufficiently advanced technology generally, and systems described in Drexler’s highly specific Nanosystems book particularly, to be sufficient albeit not necessary (brain scanning might work too). There’s also a lot of clueless objections along lines of “But they won’t just spring back to life when you warm them up” which don’t bear on the key question one way or another. Real debate on this subject is from people who understand the concept of information loss, offering neurological scenarios in which information loss might occur; and real cryonicists try to develop still-better suspension technology in order to avert the remaining probability mass of such scenarios. However, for information loss to actually occur, given current vitrification technology which is actually pretty darned advanced, would require that we have learned a new fact presently unknown to neuroscience; and so scenarios in which present cryonics technology fails are speculative. It’s not a question of “fail to disprove”, it’s a question of what happens if you just extrapolate current knowledge at face value without worrying about whether the conclusion sounds weird. Similarly, you can postulate a social collapse which wipes out the infrastructure for liquid nitrogen production, and a cryonics facility could try to further defend against that scenario by having on-premises cooling powered by solar cells… but if you were actually told the US would collapse in 2028, you would have learned a new fact you did not presently know; it’s not a default assumption.
There’s also a lot of clueless objections along lines of “But they won’t just spring back to life when you warm them up” which don’t bear on the key question one way or another.
This is, of course, not anywhere in anything that kalla724 or I said.
However, for information loss to actually occur, given current vitrification technology which is actually pretty darned advanced, would require that we have learned a new fact presently unknown to neuroscience; and so scenarios in which present cryonics technology fails are speculative.
Thank you, this is a solid claim that current cryonics practice preserves sufficient information (even if we presently have literally no idea how to get it out).
This is, of course, not anywhere in anything that kalla724 or I said.
If you complain about how it would be hard to in-situ repair denatured proteins—instead of talking about how two dissimilar starting synapses would be mapped to the same post-vitrification synapse because after denaturing it’s physically impossible to tell if the starting protein was in conformation X or conformation Y—then you’re complaining about the difficulty of repairing functional damage, i.e., the brain won’t work after you switch it back on, which is completely missing the point.
If neuroscience says conformation X vs. conformation Y makes a large difference to long-term spiking input/output, which current neuroscience holds to be the primary bearer of long-term brain information, and you can show that denaturing maps X and Y to identical end proteins, then the ball has legitimately been hit back into the court of cryonics because although it’s entirely possible that the same information redundantly appears elsewhere and the brain as a whole still identifies a single person and their personality and memories, telling us that cryonics worked would now tell us a new fact of neuroscience we didn’t previously know (e.g. that the long-term behavior of this synapse was reflected in a distinguishable effect on the chemical balance of nearby glial cells or something). But currently, if we find out that cryonics doesn’t work, we must have learned some new fact of neuroscience about informationally important features of the brain not visible in vesicle densities, synaptic configurations, and other things that current neuroscience says are important and that we can see preserved in vitrified rat brains.
We don’t have current tech for getting info out. There’s solid foreseeable routes in both nanoimaging and nanodevices. If the molecules are in-place with sufficient resolution, sufficiently advanced and foreseeable future imaging tech or nanomanipulation tech should be able to get the info out. Like, Nanosystems level would definitely be sufficient though not necessary, and those are some fairly detailed calculations, estimates, and toy systems being bandied about.
The key steps are whether on a molecular level, no more than one original person has been mapped to one frozen brain
Maybe I’m missing something, but even with cremation, on a molecular level probably no more than one person gets mapped to one specific pile of ash, because it would be a huge coincidence if cremating two different bodies ended up creating two identical piles of ash.
You’re missing something. Any one person gets mapped to a very wide spread of possible piles of ash. These spreads overlap a lot between different people. Any one pile of ash could potentially have been generated by an exponentially vast space of persons.
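A toy numerical illustration of those overlapping spreads (everything here is invented; the bit-strings are a stand-in for identity-relevant state, not a model of any real chemistry):

```python
import random

random.seed(0)

N_BITS = 64       # toy stand-in for a person's identity-relevant state
N_PEOPLE = 10_000

def cremate(person):
    # Keeps only a crude aggregate (think: elemental composition of the ash).
    # Vast numbers of distinct inputs share each output.
    return sum(person)

def vitrify(person):
    # Keeps the structure, with a few random local perturbations ("damage").
    damaged = list(person)
    for i in random.sample(range(N_BITS), 3):
        damaged[i] ^= 1
    return tuple(damaged)

people = [tuple(random.randint(0, 1) for _ in range(N_BITS))
          for _ in range(N_PEOPLE)]

ash = [cremate(p) for p in people]
ice = [vitrify(p) for p in people]

print("distinct cremation outputs:    ", len(set(ash)), "for", N_PEOPLE, "people")
print("distinct vitrification outputs:", len(set(ice)))
```

The cremation-like map collapses thousands of distinct inputs onto a few dozen outputs, so any one pile of ash is compatible with an enormous preimage; the vitrification-like map, despite the damage, leaves the outputs distinguishable from one another.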
I understood that a pretty important element in the cryonics argument is sticking to things that are feasible given our current understanding of physics, though not necessarily given our current level of technology. Conflating technology and physics here will turn the arguments into hash, so it’s kinda important to keep them separate. It’s generally assumed that the future superintelligences will obey laws of physics that will be pretty much what we understand them to be now, although they may apply them to invent technologies we have no idea about. “Things will have to continue working with the same laws of physics they’re working with now” seems different to me from “any random magical stuff can happen because Singularity”, which you seem to be going for here.
I’m not sure if “just don’t break the laws of physics” is strong enough though. Few people think it very feasible that there would be any way to reconstruct a human body locked in a box and burnt to ash, but go abstract enough with the physics and it’s all just a bunch of particles running on neat and reversible trajectories, and maybe some sort of Laplace’s demon contraption could track enough of them and trace them back far enough to get the human persona information back. (Or does this run into Heisenberg uncertainty?)
The “possible physically but not technologically” seems like a rather tricky type of reasoning. Imagine trying to explain that you should be able to build a nuclear reactor or a moon rocket to someone who has never heard of physics, in 1920 when you don’t have the tech to do either yet. But it seems like the key to this argument, and I rarely see people engaging with it. The counterarguments seem to be mostly about either the technology not being there or philosophical arguments about the continuity of the self.
Imagine trying to explain that you should be able to build a nuclear reactor or a moon rocket to someone who has never heard of physics, in 1920 when you don’t have the tech to do either yet.
H. G. Wells did it: http://en.wikipedia.org/wiki/The_War_in_the_Air http://en.wikipedia.org/wiki/First_Men_In_The_Moon
Also, people can sometimes do it themselves:
http://www.smithsonianmag.com/history-archaeology/For-40-Years-This-Russian-Family-Was-Cut-Off-From-Human-Contact-Unaware-of-World-War-II-188843001.html
Relevant quote:
“As the Soviet geologists got to know the Lykov family, they realized that they had underestimated their abilities and intelligence. Each family member had a distinct personality; Old Karp was usually delighted by the latest innovations that the scientists brought up from their camp, and though he steadfastly refused to believe that man had set foot on the moon, he adapted swiftly to the idea of satellites. The Lykovs had noticed them as early as the 1950s, when “the stars began to go quickly across the sky,” and Karp himself conceived a theory to explain this: “People have thought something up and are sending out fires that are very like stars.”
Note that what I posit as the apparent argument makes no contentions about continuity of self—let’s assume minds can in fact be copied around like MP3s.
Yes, I’m annoyed when people pull out a hypothetical magic-equivalent superintelligence that will make everything all better as an argument so solid that the burden of proof is to disprove it: “we don’t know what such a being could do (or, indeed, anything else about it), therefore you must prove that such a hypothetical being could not do (whatever magic-equivalent is needed at that point).” They don’t know how to get there from here, but they’re trying really hard, therefore this hypothetical being should be assumed?
“we don’t know what such a being could do (or, indeed, anything else about it), therefore you must prove that such a hypothetical being could not do (whatever magic-equivalent is needed at that point).”
I just said we’re assuming we know it can’t break the laws of physics.
We can tell that if you blow up someone with antimatter, putting them back together would have to involve breaking the speed of light unless you start out controlling the entire surrounding light cone before the person was blown up. If the person was vitrified, there isn’t a similar obvious violation of laws of physics involved in putting them back together.
So it seems like cryonics after death gives you a better chance at being eventually reanimated than antimatter burial after death. With regular burial definitely leaning towards the antimatter option, the causal stuff that needs to be traced back to get you together gets spread too wide. Yet people still argue as if cryonics should be treated just the same as regular burial as long as there’s no demonstrable technology that shows it working for humans.
I’m not sure why it’s a dealbreaker to assume that the technology side will advance into something we can’t fully anticipate. Today’s technology is probably extremely weird from the viewpoint of someone from 1900, but barring the quantum mechanical bits, it’s still based on the laws of physics a physicist from 1900 would be quite familiar with.
Today’s technology is probably extremely weird from the viewpoint of someone from 1900, but barring the quantum mechanical bits, it’s still based on the laws of physics a physicist from 1900 would be quite familiar with.
The GPS depends on relativity. And “barring the quantum mechanical bits” is a hell of an overwhelming exception. (But make that “a physicist from 1930” and I will agree.)
Heavy functional damage still rules out some possible revival methods, so reduces probability of success.
“Warm ’em up and see if they spring back to life” was a possible revival method that cryonicists already didn’t believe in, so pointing out its impossibility should not affect probability estimates relative to what cryonicists have already taken into account.
The idea that when people disagree over complex topics they should break their disagreement down is one I’ve learned in part from Robin Hanson, and in fact he applies it to cryonics.
While Robin has fewer categories, if you look at the detailed probabilities that people gave we could throw out most of their answers without changing their final numbers; people were good about saying “that seems very unlikely” and giving near-zero probabilities. Most of the effect on the total comes from a few questions where people were saying “oh, that seems potentially serious”. If I do this more I’ll fold many of the less likely questions into more likely ones (mostly so I get a shorter survey) but I don’t think that will change the outcome much.
I would expect unpacking to work for two reasons: to help avoid the planning fallacy and to let us see (and focus on) the individual steps people most disagree on.
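A rough arithmetic check of the point above about near-zero answers, with made-up numbers standing in for survey responses: the combined estimate behaves like a product of per-step survival probabilities, so the questions assigned near-zero failure probability barely move the bottom line.

```python
# Hypothetical per-question failure probabilities (not anyone's real answers).
p_fail = [0.001, 0.002, 0.0005, 0.001, 0.003,   # "that seems very unlikely"
          0.30, 0.25, 0.40]                     # "oh, that seems potentially serious"

def p_success(fail_probs):
    total = 1.0
    for p in fail_probs:
        total *= (1.0 - p)   # survive each failure mode, treated as independent
    return total

print("using all questions:     %.4f" % p_success(p_fail))
print("using only serious ones: %.4f" % p_success([p for p in p_fail if p >= 0.05]))
```

With these particular numbers the two results differ only in the third decimal place, which is the sense in which most answers could be thrown out without changing the final figure.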
unpacking all the disjunctive paths to success into finer and finer subcategories
As far as I can tell there’s really only one path to success, and it’s the one I put here. In my reply to torekp I talked about why I thought in-the-flesh revival was enough less likely not to matter. What would you put as disjunctive paths where “you sign up to get frozen and start paying for it” makes the difference in whether you are revived?
If any disjunctive paths are serious enough I’m willing to go back and add them to my model.
EDIT retracted: “looking up the Subadditivity effect I think your claim is just wrong. If anything breaking larger probabilities down makes for larger combined probabilities”. [This was wrong because I was confusing the negative and positive formulations. Robin Hanson’s is positive (which subadditivity should push in the ‘cryonics-likely’ direction) while mine was negative (which subadditivity should push in the ‘cryonics-unlikely’ direction).]
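A minimal sketch of why the sign of the formulation matters here. The 1.5x “unpacking inflation” factor below is invented purely for illustration; the real size of the subadditivity effect is an empirical question.

```python
# Toy model: assume (for illustration only) that unpacking a category into
# sub-questions inflates the total elicited probability for that category.
INFLATION = 1.5

p_fail_packed = 0.6                    # hypothetical "packed" failure estimate
success_packed = 1 - p_fail_packed

# Negative formulation: the failure side is unpacked, success stays packed.
p_fail_unpacked = min(1.0, p_fail_packed * INFLATION)
success_negative_form = 1 - p_fail_unpacked

# Positive formulation: the success side is unpacked, failure stays packed.
p_success_packed = 1 - p_fail_packed
success_positive_form = min(1.0, p_success_packed * INFLATION)

print(f"packed estimate of success:            {success_packed:.2f}")
print(f"unpacking the failure side (negative): {success_negative_form:.2f}")
print(f"unpacking the success side (positive): {success_positive_form:.2f}")
```

Under this toy assumption the same underlying estimate comes out lower when the failure modes are the thing unpacked and higher when the success paths are, which is the direction the retraction above describes.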
As far as I can tell there’s really only one path to success, and it’s the one I put here.
I raised an alternative path to success when we discussed this Sunday, at the end when you asked for probability of “other failure” and I argued that it should go both ways. Specifically, I suggested that we could be in a multiverse such that being cryopreserved, even if poorly, would increase the probability of other universes copying you into them. I don’t remember the probability I gave this at the time, but I believe it was on the order of 10^-2: small, but still bigger than your bottom-line probability of 1/1500 (which I disagree with) for cryonics working the obvious way.
Some other low-probability paths-to-win that you neglected:
My cryopreservation subscription fees are the marginal research dollars that prevent me from dying in the first place, via a cryonics-related discovery with non-cryonics implications
I am unsuccessfully preserved but my helping cryonics reach scale saves others; a future AI keeps me alive longer because having signed up for cryonics signaled that I value my life more
While my cryopreserved brain is not adequate to resurrect me by itself, it will be combined with electronic records of my life and others’ memories of me to build an approximation of me.
There are also some less-traditional paths-to-lose:
Your cryopreservation subscription fees prevent you from buying something else that ends up saving your life (or someone else’s)
You would never die anyway, so your cryopreservation fees only cost pre-singularity utilons from you (or others you would have given the money to).
Simulation is possible, but it is for some reason much “thinner” than reality; that is, a given simulation, even as it runs on a computer existing in a quantum MWI, follows only a very limited number of quantum branches, so has a tiny impact on the measure of the set of future happy versions of you (smaller even than the plain old non-technological-quantum-immortality versions who simply didn’t happen to die).
You are resurrected by a future UFAI in a hell-world. For instance, in order to get one working version of you, the UFAI must create many almost-versions which are painfully insane; and its ethics say that’s OK. And it does this to all corpsicles it finds but not to any other dead people.
I have strong opinions about the likelihood of these (I’d put one at p>99% and another at p<1%) but in any case they’re worth mentioning.
Hmm, regarding quantum immortality, I did think about it. Taken to its extreme, I could perform quantum suicide while tying the result of the quantum draw to the lottery. Then it occurred to me that the vast majority of worlds, in which I did not win the lottery, would contain one more sad mother. Such a situation scores far lower in my utility function than the status quo does.
I feel I should treat quantum suicide by cryostination the same way. The only problem is that the status quo bias works against me this time.
Sorry: I edited the comment you were responding to to clarify my intended meaning, and now perhaps the (unintended?) idea you were responding to is no longer there.
Whoops; this totally slipped my mind. Thanks for including them here.
Yes, that was the claim.
looking up the Subadditivity effect I think your claim is just wrong. If anything breaking larger probabilities down makes for larger combined probabilities.
I could equally make cryonics success sound almost certain
I’d be interested to see someone do that.
There are a lot of variants on this exercise that could be studies in bias. The five of us doing this estimate on the bus, for example, realized that our answers came out clustered while Jeff’s was far away because we had done it together. For each individual question we were supposed to think of our own answer before anyone spoke, to avoid anchoring. But we were anchored by the answers the others had given to all the previous questions.
To me this just looks like a bias-manipulating “unpacking” trick—as you divide larger categories into smaller and smaller subcategories, the probability that people assign to the total category goes up and up.
How do you know the raised estimate with this “trick” is worse than the estimate without?
I could just as easily say, “As you merge smaller categories into larger and larger categories, the probability that people assign to the total category goes down.”
‘Subadditivity effect’
Which points in the opposite direction.