By “cryogenic preservation”, I mean “long-term cryogenic preservation and brain/body reawakening”. This should be clear from context. Anyway...
Yes, it is untested. For this concept to be tested, they would (obviously) need to cryopreserve a human brain/body and then attempt to successfully re-awaken it.
And yes, it is unknown. It is true that cryopreservation does a better job than, say, “dirt preservation”, i.e., “worm food preservation”. Nevertheless, it is unknown how the resuscitation and repair of the brain would work. Let alone the concept of “brain scanning”, which remains only a pure (albeit alluring) science fiction speculation.
EDIT: I’m sorry, I must admit to being somewhat ignorant about the subject. I’ve just found links to some archives of prior tests. However, by the standard of “tests with a reasonable chance of success”, I stand by my argument.
For this concept to be tested, they would (obviously) need to cryopreserve a human brain/body and then attempt to successfully re-awaken it.
And the fact that mammalian organs have already been successfully cryopreserved and revived doesn’t cause you to reevaluate the chance of revival for humans in the future?
Nevertheless, it is unknown how the resuscitation and repair of the brain would work.
Unknown in the sense that there are many candidate methods that look like they’ll work, but they require advances in computer hardware, materials science, and other fields. The method doesn’t matter. All that matters is that enough information is preserved today so that some future technology can eventually recover you.
Let alone the concept of “brain scanning”, which remains only a pure (albeit alluring) science fiction speculation.
Patient HM’s brain was cryopreserved and microtomed so that scientists could study it. Better microtomes and microscopy equipment would allow for a brain scan at high enough fidelity for emulation. This example doesn’t require any new inventions, just improvements on existing devices. Are you willing to bet that no future technology will ever be able to reconstruct a mind from a cryopreserved brain?
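As a purely illustrative back-of-envelope sketch (the brain volume, voxel size, and bytes-per-voxel below are assumptions of mine, not figures from this discussion), the scale of such a scan looks like a storage and throughput problem rather than a new-physics problem:

```python
# Purely illustrative back-of-envelope; every parameter is an assumption.
brain_volume_m3 = 1.3e-3   # roughly 1.3 litres for an adult human brain
voxel_edge_m = 10e-9       # assume 10 nm voxels, in the range of electron microscopy
bytes_per_voxel = 1        # assume 1 byte of greyscale data per voxel

voxels = brain_volume_m3 / voxel_edge_m ** 3
raw_bytes = voxels * bytes_per_voxel

print(f"{voxels:.1e} voxels, ~{raw_bytes / 1e21:.1f} ZB of raw data")
# 1.3e+21 voxels, ~1.3 ZB of raw data
```

Whether 10 nm is actually sufficient fidelity for emulation is exactly the kind of open question being argued here.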
All the evidence you’re suggesting is indirect and far removed from the actual goal. And the successes of organ/dog/etc. cryonics you keep mentioning, on inspection, have been subject to much more extreme constraints (e.g., extremely limited time, extensive pharmacological life support systems) than the conditions of actual cryonic preservation. However you might want to qualify it, the cryonics establishment is just speculating.
There are many scientific fields that run huge numbers of unsuccessful tests and never (or after fifty years and counting) solve the problem (e.g., plasma confinement). Cryonics hasn’t even been tested.
All the evidence you’re suggesting is indirect and far removed from the actual goal. And the successes of organ/dog/etc. cryonics you keep mentioning, on inspection, have been subject to much more extreme constraints...
You have to test airfoils in wind tunnels before you can build a plane. Isn’t all this stuff evidence in favor of cryonics? And “far removed”? I doubt either of us could tell the difference between rabbit kidney cells and human brain cells under a microscope.
You didn’t answer any of the questions I asked. Maybe I should be more specific. Please tell me which parts of cryonics you think make it unlikely that cryopreserved people will ever be revived:
Do you think that the cryopreservation process itself damages the brain cells enough to destroy the mind?
Do you think that long-term storage at low temperatures damages the brain enough to destroy the mind?
Do you think that no future technology will be able to recover the mind from a cryopreserved brain?
Also, what part of my example do you have a problem with? Do you think there will not be any advances in microtome, microscope, and computing technology?
There are many scientific fields that run huge numbers of unsuccessful tests and never (or after fifty years and counting) solve the problem (e.g., plasma confinement).
Never is a lot longer than 50 years. You mention plasma confinement. JET has a gain of 0.7 and ITER is designed to have a gain of 10. Both are experimental reactors, but they should cause you to doubt that plasma confinement will never work. You can argue that it won’t be as cost-effective as other power generation methods, but if fusion plants brought people back from the dead, they’d be built eventually.
Plasma physics is actually my research area. :) I was just using that as an example to show that you can’t assume something will work just because it seems plausible.
I haven’t answered your questions because the burden of proof is on you. Refuting my arguments wouldn’t show anything; I don’t claim to be an expert on the subject, and your refutation wouldn’t prove anything, because in the end you still don’t have any evidence.
It’s been good discussing this (genuine comment) but from my side, this discussion is over. I’m just repeating myself now.
This is a larger issue here than just cryonics, which might be why you’re getting some downvotes. While few of the things we’ve been referring to would be admissible in a courtroom, the sense in which we use the word “evidence” is a bit more general than that. (Do, in particular, read the post linked within the wiki article.)
And it’s not just a matter of idiosyncratic word usage—the way we think of evidence really is better suited to figuring things out than the working definition most people use in order to say things like “There’s no real evidence that people evolved from animals, because you weren’t there to see it happen!”
What we mean is that there are plenty of facts about our physics and our biology that fit into “how we’d expect the world to look if it’s true that cryonics works”, but don’t fit into any stated version of “how we’d expect the world to look if cryonics is doomed to failure”. These are, in fact, pieces of evidence that cryonics should work.
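A minimal sketch of that notion of evidence, with probabilities that are purely illustrative placeholders:

```python
# Illustrative only: an observation that is more expected if cryonics can work
# than if it is doomed shifts the odds toward "can work" by Bayes' rule.
prior_odds = 0.2 / 0.8          # assumed prior odds that revival is workable

p_obs_if_workable = 0.9         # how expected the observation is if it's workable
p_obs_if_doomed = 0.3           # how expected it is if cryonics is doomed
likelihood_ratio = p_obs_if_workable / p_obs_if_doomed   # 3.0

posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)
print(round(posterior_prob, 3))  # 0.429, up from 0.2: evidence, not proof
```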
It’s the same principle behind Feynman’s lecture “There’s Plenty of Room at the Bottom”, when he discussed nanotechnology in theory. Of course he couldn’t point to full examples of what he was talking about, but it was very valid to say that ‘if physics works the way we think it does, then these things should be quite possible to do’.
It’s the same principle behind Feynman’s lecture “There’s Plenty of Room at the Bottom”, when he discussed nanotechnology in theory. Of course he couldn’t point to full examples of what he was talking about, but it was very valid to say that ‘if physics works the way we think it does, then these things should be quite possible to do’.
See also Eliezer’s post Is Molecular Nanotechnology “Scientific”? (also found at the wiki article you gave — that wiki’s really starting to shape up).
I agree that this argument depends a lot on how you look at the idea of “evidence”. But it’s not just in the court-room evidence-set that the cryonics argument wouldn’t pass. And it’s unfair to compare evolutionary evidence with cryonics evidence, where the former has been explicitly and clearly documented and tested (as in dog breeding), whereas the latter (as I keep repeating) has had absolutely no testing whatsoever.
In the general scientific community, evidence is required to be explicitly linked to the claim it supports. Testing like “bringing a dog back from a low-blood-pressure state for 15 minutes while under intensive pharmacological life support” does not establish that human cryonic regeneration will work. It justifies a certain optimism, but (in my mind) this optimism appears to have taken over in the context of the discussion we’re having here.
In the physics community, or in the mainstream scientific community in general, the cryonics argument doesn’t pass, for these reasons. (You might argue that the mainstream scientific community is simply ignorant of the cryonics idea, but again, that’s a testament to the same conclusion.)
The reason for the difference in “evidence” is that test-evidence is crucial for scientific justification. There’s a deep, wide difference between strong plausibility and demonstrated truth. That’s why, for example, the whole field of GAI and singularity science receives so little attention. It’s all extremely plausible, but plausibility alone doesn’t count for very much, and that’s why this is a forum, not an academic conference. To me, it definitely doesn’t justify paying hundreds or thousands of dollars—and that’s assuming that the price doesn’t jump.
For the cryonics argument, the leaps of assumption required to go from plausibility to fact include, for example, nanotechnological repair techniques, which don’t even exist in any form today. You can argue about plausibility all you want (as AngryParsley has done), but that’s only effective insofar as it demonstrates further plausibility. It doesn’t actually take you anywhere, and it doesn’t provide any evidence of the sort that you need to do real science (i.e., publish papers).
And I appreciate your concern about my down votes, but it’s OK, I think I’m happily doomed to a constant zero karma status.
Then you are, in a technical sense, crazy. The rules of probability theory are rules, not suggestions. No one gets to modify what counts as “rational evidence”, not even science; it’s given by Bayes’s Theorem. If you make judgments based on other criteria like “not believing anything strange can happen, regardless of what existing theories predict when extrapolated, unless I see definite confirmation, regardless of the reasonableness of expecting definite confirmation right now even given that the underlying theory is correct”, you’ll end up with systematically wrong answers. Same thing if you try to decide in defiance of expected utility, using a rule like “ignore the stakes if they get large enough” or “ignore any outcome that has not received definite advance confirmation regardless of its probability under a straight extrapolation of modern science”.
I very easily believe that you think I’m crazy. Then again, you’re the one paying thousands of dollars to freeze yourself, devoting your life to philosophy in the quest of achieving a technical, non-philosophical goal, and believing that the universe is instantaneously creating infinitely many new worlds at every instant.
Your idea of “data extrapolation” is drastically different from mine. I’m using the accepted modus operandi of data extrapolation in physics, biology, and all other hard sciences, where we extrapolate conclusions from direct evidence. While my form of extrapolation is in no way “superior”, it does have a much higher degree of certainty, and I certainly don’t feel that that’s somehow “crazy”.
If I’m crazy, then the entire scientific community is crazy. All I’m doing is requiring the standard level of data before I believe someone’s claim to be correct. And this standard is very, very fruitful, I might add.
Your repeated references to your own background in physics as a way of explaining your own thinking suggest to me that you may not be hearing what other people are saying, but rather mistranslating.
I don’t see anybody saying they think unfreezing will definitely work. By and large, I see people saying that if you greatly value extending your life into the far future, then, given all the courses of action that we know of, signing up for cryonics is one of the best bets.
Evidence is knowledge that supports a conclusion. It isn’t anything like proof. In that sense, there is evidence that unfreezing will work someday, which is not to say that anybody knows that it will work.
Ironically, you’re mistaking my assertion in the same manner that you think I’m mistaking others’. It’s correct that people aren’t claiming certainty in their beliefs, but neither am I. In fact, I’m supporting uncertainty in the probabilities, and that’s all.
I haven’t once claimed that cryonics won’t work, or can’t work. My degree of skepticism was engendered by the extreme degree of support shown in the top post. That being said, I haven’t even expressed that much skepticism. I merely suggested that I would wait a while before coming to as strong a conclusion as EY’s, e.g., one so strong as to pass value judgements on others for not making the same decision.
You do, however, seem to be making the claim that given the current uncertainties, one should not sign up for cryonics; and this is the point on which we seem to really disagree.
One’s actions are a statement of probability, or they should be if one is thinking and acting somewhat rationally under a state of uncertainty.
It’s not known with certainty whether the current cryonics procedure will suffice to actually extend my life, but that doesn’t excuse me from making a decision about it now since I might well die before the feasibility of cryonics is definitively settled, and since I legitimately care about things like “dying needlessly” on the one hand and “wasting money pointlessly” on the other.
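A toy expected-value sketch of that trade-off; every number below is a placeholder of mine, not anyone’s actual estimate:

```python
# Placeholder numbers only; the point is the structure of the decision.
p_revival = 0.05              # assumed subjective probability that it works for you
value_of_revival = 2_000_000  # assumed dollar-equivalent value of the extra life
lifetime_cost = 50_000        # assumed total cost of membership plus insurance

ev_sign_up = p_revival * value_of_revival - lifetime_cost   # 50000.0
ev_do_nothing = 0.0

print(ev_sign_up > ev_do_nothing)  # True with these numbers; change the estimates
                                   # and the sign can flip either way
```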
You might legitimately come to the conclusion that you don’t like the odds when compared with the present cost and with respect to your priorities in life. But saying “I’m not sure of the chances, therefore I’ll go with the less weird-sounding choice, or the one that doesn’t require any present effort on my part” is an unsound heuristic in general.
But saying “I’m not sure of the chances, therefore I’ll go with the less weird-sounding choice, or the one that doesn’t require any present effort on my part” is an unsound heuristic in general.
A good point.
But saying “I’m not sure of the chances, therefore I’ll go with the less weird-sounding choice, or the one that doesn’t require any present effort on my part” is an unsound heuristic in general.
I might be missing a bit of jargon here, but what would it mean for a heuristic to be “unsound”? Do you mean it performs badly against other competing heuristics?
It sounds to me like a heuristic that should work pretty well, and I’m not aware of any heuristic that is as simple but works better in general cases. It seems to be of the same caliber as “gambling money is bad”.
Uncertainties in the probabilities can’t be absorbed into a single, more conservative probability? If I’m 10% sure that someone’s estimate that cryonics has a 50% likelihood of working is well calibrated, isn’t that the same as being 5% sure that cryonics is likely to work?
You’re assigning 0% probability to (cryonics_working|estimate_miscalibrated). Therefore you should buy the lottery ticket.
My calculation was simplistic, you’re right, but still useful for arriving at a conservative estimate. To add to the nitpickiness, we should mention that the probability that someone is completely well calibrated on something like cryonics is almost surely 0. The 10% estimate should instead be for the chance that the person’s estimate is well calibrated or underconfident.
ETA: applying the same conservative method to the lottery odds would lead you to be even less willing to buy a ticket.
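A minimal sketch of the decomposition being argued over here, with made-up numbers:

```python
# Illustrative decomposition of P(works) over whether the source is calibrated.
p_calibrated = 0.10              # chance the estimate is well calibrated (or underconfident)
p_works_if_calibrated = 0.50     # the source's own estimate

# The simple conservative estimate drops the other branch entirely:
p_works_conservative = p_calibrated * p_works_if_calibrated          # 0.05

# Keeping the second term (your own independent estimate, assumed here):
p_works_if_miscalibrated = 0.02
p_works_full = (p_calibrated * p_works_if_calibrated
                + (1 - p_calibrated) * p_works_if_miscalibrated)     # 0.068

print(p_works_conservative, p_works_full)
```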
The point I’m making is that there is an additional parameter in the equation: your probability that cryonics is possible independent of that source. This needn’t be epsilon any more than the expectation of the lottery value need be epsilon.
I agree. That’s why I said so previously =P
Potentially more—perhaps their process for calibration is poor, but the answer coincidentally happens to be right.
Suppose someone offered you 1,000,000 : 1 odds against string theory being correct. You can buy in with $5. If string theory is ever confirmed (or at least shown to be more accurate than the Standard Model) then you make $5,000,000, otherwise you’ve wasted your $5. Do you take the option?
There is—as far as I know—no experimental evidence in favor of string theory, but it certainly seems plausible to many physicists. In fact, based on that plausibility, many physicists have done the expected value calculation and decided they should take the gamble and devote their lives to studying string theory—a decision very similar to the hypothetical monetary option.
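For concreteness, the arithmetic behind that hypothetical bet (the stake and payoff are as stated; the probability is a placeholder):

```python
# Expected value of the hypothetical $5 bet at 1,000,000 : 1.
stake = 5
payoff = 5_000_000
p_confirmed = 1e-4            # placeholder subjective probability

ev = p_confirmed * payoff - (1 - p_confirmed) * stake
print(round(ev, 2))           # 495.0: positive for any probability above ~1e-6
```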
devoting your life to philosophy in the quest of achieving a technical, non-philosophical goal
Less Wrong reads like philosophy, but that’s because it’s made up of words. As I understand it, Eliezer’s real quest is writing an ideal decision theory, set of goals, and rigorously proving the safety of self-modifying code for optimizing that decision theory. All of this philosophy is necessary in order to settle down on what it even means to have an ideal decision theory or set of goals.
There’s a deep, wide difference between strong plausibility and demonstrated truth.
This right here is the root of the disagreement. There is no qualitative difference between a “strong plausibility” and a “demonstrated truth”. Beliefs can only have degrees of plausibility (aka probability) and these degrees never attain certainty. A “demonstrated truth” is just a belief whose negation is extremely implausible.
The problem is that what you call “strong plausibility” is entirely subjective until you bring evidence onto the table, and that’s the point I’m trying to make.
Edit: In other words, I might think cryonics is strongly plausible, but, for example, I might also think the existence of gnomes seems pretty plausible; yet I don’t believe either one until I see some evidence.
The problem is that what you call “strong plausibility” is entirely subjective until you bring evidence onto the table, and that’s the point I’m trying to make.
A piece of evidence regarding a claim is any information, of whatever sort, that ought to affect the plausibility (aka probability) that you assign to the claim. If that evidence makes the claim “strongly plausible”, then that’s an objective fact about how the laws of probability require you to update on the evidence. There is nothing subjective about it.
In other words, I might think cryonics is strongly plausible, but, for example, I might also think the existence of gnomes seems pretty plausible; yet I don’t believe either one until I see some evidence.
If you thought that garden gnomes were strongly plausible, then either
you had the misfortune to start with a very inaccurate prior,
you had the misfortune to be exposed to highly misleading evidence, or, most likely,
you failed to apply the rules of probable inference correctly to the evidence you received.
None of these need be the case for you to think that cryonics is plausible enough to justify the expense. That is the difference between cryonics and garden gnomes.
And I appreciate your concern about my down votes, but it’s OK, I think I’m happily doomed to a constant zero karma status.
So that’s why you chose your moniker...
Seriously, though, I doubt it. You’re a contrarian here on cryonics and AI, but ISTM most of the downvotes have been due to misinterpretation or miscommunication (in both directions, but it’s easier for you to learn our conversational idiosyncrasies than vice versa) rather than mere disagreement. As soon as you get involved in some discussions on other topics and grow more accustomed to the way we think and write, your karma will probably drift up whether you like it or not.
I agree that this argument depends a lot on how you look at the idea of “evidence”. But it’s not just in the court-room evidence-set that the cryonics argument wouldn’t pass.
Yes, that’s very true. You persuasively argue that there is little scientific evidence that current cryonics will make revival possible.
But you are still conflating Bayesian evidence with scientific evidence. I wonder if you could provide a critique that says we shouldn’t be using Bayesian evidence to make decisions (or at least decisions about cryonics), but rather scientific evidence. The consensus around here is that Bayesian evidence is much more effective on an individual level, even though with current humans science is still very much necessary for overall progress in knowledge.
Burden of proof? Did you look at the giant amount of information already written on cryonics?
Fine, here are my answers to my own questions.
Do you think that the cryopreservation process itself damages the brain cells enough to destroy the mind?
No. Cryopreserving some tissues is common today. Organs from mammals have already been cryopreserved and transplanted. Electron micrographs of cryopreserved brain tissue show almost no degradation. There is an issue with microfractures, but the amount of information destroyed by them is minimal: the fractures are very few and translate tissue along a plane by only a few microns.

The other problem is ischemic injury. This is mitigated by having standby procedures and ID tags with instructions on how to begin cooling the body. Brain cells don’t immediately die when deprived of oxygen, but they do start an ischemic cascade that currently can’t be prevented. The cascade takes several hours at normal body temperature, and that window is drastically extended if the body is brought close to freezing.
Do you think that long-term storage at low temperatures damages the brain enough to destroy the mind?
No. Biochemistry is basically stopped at 77 K. The only degradation comes from free radicals caused by cosmic rays: about 0.3 millisieverts is absorbed each year. The LD50 for acute radiation poisoning is 3-4 sieverts with current medicine. That’s at least 10,000 years of cosmic rays. The human body itself contains some radioactive isotopes, but if you do the math it’s still well over 1,000 years before radiation poisoning is an issue.
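A quick check of that arithmetic, using the dose figures quoted above:

```python
# Sanity check of the quoted cosmic-ray dose figures.
dose_per_year_sv = 0.3e-3   # 0.3 millisieverts per year
ld50_sv = 3.0               # lower end of the quoted 3-4 sievert LD50

years_to_ld50 = ld50_sv / dose_per_year_sv
print(years_to_ld50)        # 10000.0 years, matching the "at least 10,000 years" figure
```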
Do you think that no future technology will be able to recover the mind from a cryopreserved brain?
No. See my comment about microtomes, microscopes, and brain emulation. That’s just using slightly improved technology. A superintelligence with molecular nanotechnology certainly wouldn’t have a problem reviving a corpsicle.
Whew, that was a lot of text and links. Once you said the discussion was over I couldn’t resist.
Nice term. I hadn’t heard it before. I read now that I am evidently supposed to find the term offensively pejorative. But I’ll take tongue-in-cheek humour over dignity any day of the week.
I guess as with other epithets, it’s acceptable for members of the slighted group to use the term amongst themselves. :)
There’s a minimum budget Sci. Fic. movie for you right there!
“What up Corpsicle?”
“Me! Mwahahahaha!”
I would like to check something. I naively imagine* that if you freeze a person cryonics-style, no damage is done to their brain immediately. So if you froze a person who was alive, and then thawed them out 5 minutes after you froze them, they’d wake up virtually the same—alive and unharmed. Is this true?
*This idea is based on stories about children frozen in lakes and optimism.
“Children frozen in lakes” are not frozen, only hypothermic. If you actually get ice in your brain cells you die.
“Freezing” a person “cryonics-style” begins with draining all the blood from their head, and the process takes more than a few minutes. So it is pretty much guaranteed to cause severe brain damage or death if you do it to a live person. Which means that it would not be legal; even if somebody had done that experiment, they could not publish it.
I don’t think so. The chemicals used for vitrification are toxic, but fixing that toxic damage is presumed to be one of the easier steps in reviving someone’s vitrified brain.
This might be true for some definition of freezing a person but not with the protocols currently used by Alcor and CI.