Conclusions: In a systematic series of studies in dogs, the rapid induction of profound cerebral hypothermia (tympanic temperature 10°C) by aortic flush of cold saline immediately after the start of exsanguination cardiac arrest-which rarely can be resuscitated effectively with current methods-can achieve survival without functional or histologic brain damage, after cardiac arrest no-flow of 60 or 90 mins and possibly 120 mins. The use of additional preservation strategies should be pursued in the 120-min arrest model.
My estimate of the probabilities involved in calculating the payoff from cryonics differs from your estimates. I do not think it follows that I am a bad parent.
Suppose your child dies. Afterward, everyone alive at the time of a Friendly intelligence explosion, plus the tiny handful signed up for cryonics, lives happily ever after. Would you say in retrospect that you’d been a bad parent, or would you plead that, in retrospect, you made the best possible decision given the information that you had?
After all, your child could die in a car crash on a shopping trip, and yet taking them along on that shopping trip could still have been the best possible choice given the statistical information that you had. Is that the plea you would make in the above event? What probabilities do you assign?
I reject your framing. I would say that I had made a bad mistake. Errors do not a bad parent make. Or, to put it another way, suppose you woke up in the Christian Hell; would you plead that you had made the best decision on the available information? Scary what-ifs are no argument. You cannot make me reconsider a probability assignment by pointing out the bad consequences if my assessment is wrong; you can only do so by adding information. I understand that you believe you’re trying to save my life, but please be aware that turning to the Dark Side to do so is not likely to impress me; if you need the power of the Dark Side, how good can your argument be, anyway?
The brain’s functioning depends on electric and chemical potentials internal to the cells as well as connections between the cells. I believe that cryonics can maintain the network, but not the internal state of the nodes; consequently I assign “too low to meaningfully consider” to the probability of restoring my personality from my frozen brain. If the technology improves, I will reconsider.
Edit: I should specify that right now I have no children, lest I be misunderstood. It seems quite possible I will have some in the near future, though.
Predictable errors do.
Hell yes.
One way of assessing probabilities is to ask how indignant we have a right to be if reality contradicts us. I would be really indignant if contradicted by reality about Christianity being correct. How indignant would you be if Reality comes back and says, “Sorry, cryonics worked”? My understanding is that dogs have been cooled to the point of cessation of brain activity and revived with no detected loss of memory, though I’d have to look up the reference… if that will actually convince you to sign up for cryonics; otherwise, please state your true rejection.
http://74.125.155.132/scholar?q=cache:ZNOvlaxp0p8J:scholar.google.com/&hl=en&as_sdt=2000
If even a percent or two of parents didn’t make predictable errors, we would probably have reached a Friendly Singularity ages ago. That’s a very high standard. If only parents who met it reproduced, the species would rapidly have gone extinct.
I don’t think this is really the issue. If I make a bet in poker believing (correctly given the available information) that the odds are in my favour but I go on to lose the hand I am not indignant—I was perfectly aware I was taking a calculated risk. In retrospect I should have folded but I still made the right decision at the time. Making the best decision given the available information doesn’t mean making the retrospectively correct decision.
I haven’t yet reached the point where cryonics crosses my risk/reward threshold. It is on my list of ‘things to keep an eye on and potentially change my position in light of new information’ however.
If you make a bet in poker believing that you have a .6 chance of winning, and you lose, I believe your claim that you will not be indignant. In this case you have a weak belief that you will win. But if you lose bets with the same probability ten times in a row, would you feel indignant? Would you question the assumptions and calculations that led to the .6 probability?
If it turns out the cryonics works, would you be surprised? Would you have to question any beliefs that influence your current view of it?
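For concreteness, the arithmetic behind that question is simple: if each bet independently has a 0.6 chance of winning, a ten-loss streak is rare enough to warrant suspicion. (A sketch; the independence assumption is mine, not the commenter’s.)

```python
# Chance that a bet won with probability 0.6 instead loses
# ten times in a row, assuming the bets are independent.
p_win = 0.6
p_ten_losses = (1 - p_win) ** 10
print(p_ten_losses)  # ~0.000105, i.e. roughly 1 in 10,000
```

A streak that unlikely is evidence that the .6 estimate itself was wrong, which is the point of the question.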
Yes, at some point if I kept seeing unexpected outcomes in poker I would begin to wonder if the game was fixed somehow. I’m open to changing my view of whether cryonics is worthwhile in light of new evidence as well.
I wouldn’t be hugely surprised if at some point in the next 50 years someone is revived after dying and being frozen. My doubts are less related to the theoretical possibilities of reviving someone and more to the practical realities and cost/benefit vs. other uses of my available resources.
There is experimental evidence to allay that specific concern. People have had flat EEGs (from barbiturate poisoning, and from non-cryogenic(!) hypothermia) and have been revived with memories and personalities intact. The network, not the transient electrical state, holds long-term information. (Oops, partial duplication of Eliezer’s post below—I’m reasonably sure this has happened to humans as well, though...) (Found the canine article: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1476969/)
So, how indignant are you feeling right now? Serious question.
Will you suspect the forces that previously led you to come up with this objection, since they’ve been proven wrong?
Will you hesitate to make a similar snap decision without looking up sources or FAQs the next time your child’s life is at stake?
Not at all, on the grounds that I do not agree with this sentence:
You are way overestimating the strength of your evidence, here; and I’m sorry, but this is not a subject I trust you to be rational about, because you clearly care far too much. There is a vast difference between “cold enough for cessation of brain activity” (not even below freezing!) and “liquid bloody nitrogen”; there is a difference between human brains and dog brains; there is a difference between 120 minutes and 120 years; there is a difference between the controlled conditions of a laboratory, and real-life accident or injury.
That said, this is a promising direction of research for convincing me. How’s this? If a dog is cooled below freezing, left there for 24 hours, and then revived, I will sign up for cryonics. Cross my heart and hope not to die.
If it turns out that cryonics as practised in 2010 works, then yes, I would be surprised. I would not be particularly surprised if a similar technology can be made to work in the future; I don’t object to the proposition that information is information and the brain is un-magical, only to the overconfidence in today’s methods of preserving that information. In any case, though, I can’t very well update on predicted future surprises, can I now?
Since you expect some future cryonics tech to be successful, there’s a strong argument that you should sign up now: you can expect to be frozen with the state of the art at the time of your brain death, not 2010 technology, and if you put it off, your window of opportunity may close.
Disclosure: I am not signed up for cryonics (but the discussion of the past few days has convinced me that I ought to).
I’m curious as to whether the upvotes are for the argument or just the disclosure. Transfer karma here to indicate upvotes just for the disclosure.
How high a probability do you place on the information content of the brain depending on maintaining electrochemical potentials? Why? Why do you think your information and analysis are better than those of those who disagree?
In order: 90%; because personality seems to me to be stateful (there is clearly some sort of long-term storage, with writing quite rapid relative to nerve growth, that seems hard to explain purely in terms of the interconnections), and because a neural network with no activation information in its nodes will not respond to a given input in the same way as the same network with some excited nodes; and because you have not given a convincing counterargument, nor a convincing appeal to expertise.
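The point about activation state can be illustrated with a toy unit: identical wiring, different node state, different response. The weights below are arbitrary numbers of my own, not a model of any real neuron:

```python
from math import tanh

# Toy recurrent unit: the response depends on both the fixed connection
# weights (the "network") and the current activation (the "node state").
W_IN, W_REC = 0.5, 0.8  # arbitrary illustrative weights

def step(x, hidden):
    """One update of the unit: same wiring, state-dependent output."""
    return tanh(W_IN * x + W_REC * hidden)

x = 1.0
quiet = step(x, hidden=0.0)    # network with internal state wiped
excited = step(x, hidden=0.9)  # same network, one node excited
print(quiet, excited)          # the two responses differ
```

This only illustrates the abstract claim that zeroed activations change a network’s response; it says nothing about how much of human personality lives in such transient state.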
Certainly the internal state of a neuron includes things that are preserved by uploading other than the wiring diagram. Anyway, are you doing a calculation where another factor of 10 makes a critical difference?
Uploading, yes; but we were discussing cryonics. Uploading is a completely different question. Indeed, I would assign a rather higher probability to uploading preserving personality, than to cryonics doing so.
And yes, I generally expect orders of magnitude to make a difference. If they don’t, then your uncertainty is so large anyway that attempting a fake precision is just fooling yourself.
Although… actually… it occurs to me that you could move the order of magnitude somewhere else. Suppose I kept your probability estimate of cryonics working, and multiplied the price by ten? Even by twenty? … That does make a pretty fair chunk of my budget, but still. I think I’ll have to revisit that calculation.
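The calculation being revisited is an expected-value comparison. A sketch with placeholder numbers (every figure below is hypothetical, not anyone’s stated estimate) shows how a factor of ten on the price can flip the answer:

```python
# Hedged sketch of the cryonics cost/benefit comparison discussed above.
# All numbers are hypothetical placeholders.
def worth_signing_up(p_revival, value_of_revival, price):
    """Sign up if the expected benefit exceeds the price."""
    return p_revival * value_of_revival > price

p = 0.05            # assumed probability that cryonics works
value = 1_000_000   # assumed dollar value placed on revival
price = 30_000      # assumed lifetime cost of membership

print(worth_signing_up(p, value, price))       # True:  50,000 > 30,000
print(worth_signing_up(p, value, price * 10))  # False: 50,000 < 300,000
```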
Not sure what exactly you mean by the “internal state of the nodes.” If you are referring to inside the individual brain cells, then I think you’re mistaken. We can already peer into the inside of neurons. Transmission electron microscopy is a powerful technology! Combine it with serial sectioning with a diamond knife and you can get quite a lot of detail in quite a large amount of tissue.
For example, consider Ragsdale et al.’s recent study, to pick the first Scopus result. They looked at some sensory neurons in C. elegans and were able to identify not just internal receptors but also which cells (the sheath cells) contain abundant endoplasmic reticulum, secretory granules, and/or lipid globules.
This whole discussion comes down to what level of scale separation you might need to recapitulate the function of the brain and the specific characteristics that make you you. Going down to, say, the atomic level would probably be very difficult. But there’s good reason to think that we won’t have to go nearly that far down to reproduce human characteristics. Have you read the pdf roadmap? No reason to form beliefs without the relevant knowledge! :)
You are responding to a point somewhat at angles to the one I made. Yes, we can learn a lot about the internal state of brain cells using modern technology. It does not follow that such state survives long-term storage at liquid-nitrogen temperatures.
Is it the immediate effects of the freezing process that trouble you or the long-term effects of staying frozen for years / decades / centuries?
Suppose your child dies. Afterward, everyone alive at the time of an unFriendly intelligence explosion, plus the tiny handful signed up for cryonics (including your child), also dies. Would you say in retrospect that you’d been a bad parent, or would you plead that, in retrospect, you made the best possible decision given the information that you had?
I, personally, will allocate any resources that I would otherwise use for cryonics to the prevention of existential risks.
I have no child; this is not coincidence. If I did have a kid you can damn well better believe that kid would be signed up for cryonics or I wouldn’t be able to sleep.
I’ll accept that excuse for your not being signed up yourself—though I’m rather skeptical until I see the donation receipt. I will not accept that excuse for your child not being signed up. I’ll accept it as an excuse for not having a child, but not as an excuse for having a child and then not signing them up for cryonics. Take it out of the movie budget, not the existential risks budget.
I don’t believe in excuses, I believe that signing up for cryonics is less rational than donating to prevent existential risks. For somewhat related reasons, I do not intend to have children.
Sounds like you could be in a consistent state of heroism, then. May I ask to which existential risk(s) you are currently donating?
I’m in the “amassing resources” phase at present. Part of the reason I’m on this site is to try and find out what organizations are worth donating to.
I am in no way a hero. I’m just a guy who did the math, and at least part of my motivation is selfish anyway.
I strongly advise you to immediately start donating something to somewhere, even if it’s $10/year to Methuselah. If there’s one thing you learn working in the nonprofit world, it’s that people who donated last year are likely to also donate this year, and people who last year planned to donate “next year” will this year be planning to donate “next year”.
Upon hearing this advice, I just donated $10 to SIAI, even though I consider this amount totally insignificant relative to my expected future donations. I will upvote anyone who does the same for any transhumanist charity.
Way to turn a correlation into causation.
Do you have an estimate of how much a new donor to SIAI is worth above and beyond their initial donation? How about given that I ask them to donate with money they were about to repay me anyway?
If it’s significant it could be well worth the social capital to spread your own donations among non-donor friends.
I plan to donate once I have X dollars of nonessential income, and yes, I have a specific value for X.
Did your calculations for X take into account discounting at 0-10%? Money for research years from now does much less good than money now.
No—thanks for the tip! I will adjust my calculations accordingly.
Or the cost of the research being delayed.
I figured that was covered by ‘much less good’; there are a lot of costs to delaying, if we wanted to enumerate them—risks of good charities going under, inflation and catastrophic economic events gnawing away at one’s stored value, the ever-present existential risks each year, etc.
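The discounting point above is plain present-value arithmetic, using the 0-10% range mentioned. (The $1000 amount and five-year delay below are arbitrary example figures.)

```python
# Present value of a donation delayed by `years` at discount rate `rate`:
#   PV = amount / (1 + rate) ** years
def present_value(amount, rate, years):
    return amount / (1 + rate) ** years

# A $1000 donation delayed five years, discounted at 10% per year:
print(round(present_value(1000, 0.10, 5), 2))  # 620.92
```

At a 0% rate the delayed donation keeps its face value, which is why the choice of rate within that 0-10% range matters so much to the calculation for X.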
Anti-akrasia, future-self-influencing recommendation: if you can afford $10/year today, make sure your current level of giving is not zero.