This not being funded would indeed be very sad. Recently a story about the tragedy that is death touched a lot of people on LessWrong; I think editing that article to link here and encouraging people to donate would be an appropriate move.
What is it exactly that’s so valuable about a person that justifies spending $30,000 worth of resources to preserve it? Their “identity”, whatever that means? Their personality, even though it’s probably a dime a dozen? Their acquired knowledge that will be outdated by the time they are revived? What is it that we want to preserve?
What is it that is lost when a person dies, that cannot be regained by creating a new one? I’m not in favor of creating new ones, but new ones are created all the time anyway, so why not learn to live with them? Why do we need to do everything the hard way?
First, we are selfish, and don’t want to die (no matter how useful we are to society). Second, we also care about a few other people close to us, and don’t want them to die. Third, we want to spare everyone from having to be afraid of death.
I think if you forget about these reasons, then there’s no point in preserving people.
Edit: I’m sorry that your comment was downvoted, but I for one think that it’s a worthwhile objection to make, even though I disagree with it for the above reasons.
I consider these to be emotional reasons rather than rational ones. Specifically not wanting to die, not wanting certain others to die, and being afraid of death are irrational (or at least it is unclear that there are rational reasons for them). I think there are less roundabout ways to (dis)solve these problems than to engineer immortality. In a more rational culture (which we should be steering for anyway), we would not be so viscerally averse to death.
Rational doesn’t mean emotionless. These are emotional reasons—to which I think I should add that I care about the pain Joe’s loved ones feel when Joe dies—but I think they’re important emotional reasons. I wouldn’t be me if I didn’t care about these things.
I would not want to become “rational” at the cost of forgetting about these reasons, and others. I want to become rational so that I can better understand my emotions, and act on them more effectively.
The emotions are irrational in the sense that they are not supported by anything—your brain generates these emotions in these situations and that’s it. Emotions are valuable and we need to use rationality to optimize them. Now, there are two ways to satisfy a desire: the obvious one is to change the world to reflect the propositional content of the desire. The less obvious one is to get rid of or alter the desire. I’m not saying that to be rational is to get rid of all your desires. I’m saying that it’s a tradeoff, and I am suggesting the possibility that in this case the cost of placating the desire to not die is greater than the cost of getting rid of it.
What worries me is this. It could well be that I am wrong and that the cost of immortality is actually lower than the cost to get rid of the desire for it. But I strongly suspect that this was never the reason for people here to pursue immortality. The real reason has to do with preservation of something that I doubt has value.
If I get rid of my desire to do something, then I’ve replaced myself by a possibly less frustrated person who doesn’t value the same things as I do. This is obviously a trade-off, yes.
On the one hand, it’s not that I’m ridiculously frustrated by our lack of immortality; I’ve kind of gotten used to it. I recognize that things could be better, yes.
On the other hand, a version of me that doesn’t care whether people die seems very different from me and frankly kind of abhorrent. I don’t even know if I want that version of me to exist, and I’m certainly not going to have it replace myself if I can help it.
The emotions are irrational in the sense that they are not supported by anything—your brain generates these emotions in these situations and that’s it.
Beliefs are also something your brain generates. Being represented in meat doesn’t by itself make an event unimportant or irrelevant. You value carefully arrived-at beliefs, because you expect they are accurate, they reflect the world. Similarly, you may value some of your emotions, if you expect that they reward events that you approve of, or punish for events that you don’t approve of.
Yes, but the question here is exactly whether this fear of death that we all share is one of those emotions that we should value, or if it is getting in the way of our rationality. Our species has a long history of wars between tribes and violence among tribe members competing for status. Death has come to be associated with defeat and humiliation.
the question here is exactly whether this fear of death that we all share is one of those emotions that we should value
Do you have specific ideas useful for resolving this question?
or if it is getting in the way of our rationality
It’s usually best to avoid using the word “rationality” in such contexts. The question is whether one should accept the straightforward interpretation of the emotions of fear of death, and at that point nothing more is added to the problem specification by saying things like “Which answer to this question is truth?” or “Which belief about the answer to this question would be rational?”, or “Which belief about this question is desirable?”.
Do you have specific ideas useful for resolving this question?
Fear of death doesn’t mean death is bad in the same way that fear of black people doesn’t mean black people are bad. (Please forgive me the loaded example.)
Fear of black people, or more generally xenophobia, evolved to facilitate kin selection and tribalism. Fear of death evolved for similar reasons, i.e., to make more of “me”. We don’t know what we mean by “me”, or if we do then we don’t know what’s valuable about the existence of one “me” as opposed to another, and anyway evolution meant something different by “me” (genes rather than organisms).
It’s usually best to avoid using the word “rationality” in such contexts.
I actually meant rationality here, specifically instrumental rationality, i.e., “is it getting in the way of us achieving our goals?”.
I feel like this thread has gotten derailed and my original point lost, so let me contrive a thought experiment to hopefully be more clear.
Suppose that someone named Alice dies today, but at the moment she ceases to exist, Betty is born. Betty is a lot like Alice in that she has a similar personality, will grow up in a similar environment and will end up affecting the world in similar ways. What of fundamental value was lost when Alice died that Betty’s birth did not replace? (The grief for Alice’s death and the joy for Betty’s birth have instrumental value, as did Alice’s acquired knowledge.)
If you find that I’ve set this up to fit my conclusions, then I don’t think we disagree.
What of fundamental value was lost when Alice died that Betty’s birth did not replace?
Hard to say. Notice that in such examples we are past the point where the value of things is motivated by instrumental value (i.e. such thought experiments try to strip away the component of value that originates as instrumental value), and terminal value is not expected to be easy to enunciate. As a result, the difficulty with explaining terminal value is only weak evidence for absence of said terminal value. In other words, if you can’t explain what exactly is valuable in such situations, that doesn’t strongly indicate that there is nothing valuable there. One of the few things remaining in such cases is to look directly at emotional urges and resolve contradictions in their recommendations in terms of instrumental value (consequentialism and game theory).
What is it that is lost when a person dies, that cannot be regained by creating a new one?
I’m uncertain about the value and fungibility of human life. Emotions clearly support non-fungibility, in particular concerning your own life, and it’s a strong argument. On the other hand, my goals are sufficiently similar to everyone else’s goals that loss of my life wouldn’t prevent my goals from controlling the world, it will be done through others. Only existential disaster or severe value drift would prevent my goals from controlling the world.
(The negative response to your comment may be explained by the fact that you appear to be expressing confidence in the unusual solution (that the value of life is low) to this difficult question without giving an argument for that position. At best the points you’ve made are arguments in support of uncertainty in the position that the value of life is very high, not strong enough to support the claim that it’s low. If your claim is that we shouldn’t be that certain, you should clarify by stating that more explicitly. If your claim is that the value of life is low, the argument you are making should be stronger, or else there is no point in insisting on that claim, even if that happens to be your position, since absent argument it won’t be successfully instilled in others.)
Emotions clearly support non-fungibility, in particular concerning your own life, and it’s a strong argument.
I (now) understand how the existence of certain emotions in certain situations can serve as an argument for or against some proposition, but I don’t think the emotions in this case form that strong an argument. There’s a clear motive. It was evolution, in the big blue room, with the reproductive organs. It cares about the survival of chunks of genetic information, not about the well-being of the gene expressions.
Thanks for helping me understand the negative response. My claim here is not about the value of life in general, but about the value of some particular “person” continuing to exist. I think the terminal value of this ceasing to exist is zero. Since posting my top-level comment I have provided some arguments in favor of my case, and also hopefully clarified my position.
Do you think that preserving my brain after the fact makes falling from a really high place any less unpleasant? Or are you appealing to my emotions (fear of death)?
Sorry for being snarky. I am sincere. I really do think that death is not such a big deal. It sucks, but it sucks only because of the negative sensations it causes in those left behind. All that said, I don’t think you gave me anything but an appeal to emotion.
It’s different. The fact that I feel bad when confronted with my own mortality doesn’t mean that mortality is bad. The fact that I feel bad when so confronted does mean that the feeling is bad.
The lack of interest in the prize puzzles me. Some very wealthy cryonicists want to tie up fortunes in speculative revival trusts, yet they depend on financially inadequate cryonics organizations to keep them in suspension against foreseeable adversities, and they seem uninterested in trying to improve the science of preserving their own brains. I don’t understand this business model.
I think the best explanation for this behavior is that cryonics is essentially a religious funeral ritual.
Most people who get cryopreserved don’t really expect, at a deep level, that it will extend their life, much like most believers in traditional religions don’t really expect an afterlife in the otherworld or reincarnation (that’s why they all fear death and generally try to postpone it as much as possible).
Professing belief in the religious tenets and performing the required rituals may provide some emotional solace as long as willing suspension of disbelief (self-deception, if you prefer) can be maintained. That might explain the lackluster interest in a potentially falsifying experiment: should it turn out that preserved brains are manifestly damaged, maintaining suspension of disbelief would become much more difficult.
Another typical function of religious beliefs and rituals is social signalling: they are a way for a community (transhumanists, in the case of cryonics) to maintain and reinforce social cohesion.
I think this hypothesis is worth bearing in mind. However, it doesn’t explain advancedatheist’s observation that wealthy cryonicists are eager to put a lot of money in revival trusts (whose odds of success are dubious, even if cryonics works) rather than donate to improve cryonics research or the financial viability of cryonics organizations.
The hypothesis “many people are engaging in cryonics as signalling/psychological-reassurance” is not incompatible with the hypothesis “there exist people interested in cryonics on a practical level, eager for potentially falsifying experiments”. Indeed, it’s even possible for both of these things to be true of a single person.
Many long-shot medical procedures serve similar functions—but this does not preclude them from being legitimate medical procedures. And there, too, I would expect a non-trivial subset of patients (and doctors) to be reluctant to seek out falsifying evidence.
There is likely some truth in your assertion that cryonics fulfills many of the same psychological and social functions as burial rituals—but that does not adequately explain all behavior in the cryonics arena.
I don’t know about CI’s due diligence. As the secretary of the Society for Venturism, which has the ability to raise money for Miss Suozzi’s suspension, I can confirm that we’ve pursued our end of checking out her story.
We helped out in getting William O’Rights cryosuspended a few years ago, for example:
Thank you! Could you publicize your confirmation? I believe there are a number of people willing to donate who were holding off until the story was confirmed. What is the best way to donate to the fund?
Some very wealthy cryonicists want to tie up fortunes in speculative revival trusts, yet they depend on financially inadequate cryonics organizations to keep them in suspension against foreseeable adversities, and they seem uninterested in trying to improve the science of preserving their own brains. I don’t understand this business model.
Think pyramids, only you don’t need thousands of slaves and a truly inconvenient amount of sandstone.
When do they need the money by? I’m currently a bit low on funds due to tuition and rent, but I can probably spare $500-1000 in a couple of weeks (I try to maintain an emergency fund, and don’t want to dip into that if I don’t need to).
Also, do you have a page I can link to where they describe the immediate funding gap? If possible, I will turn this into a donation-matching thing, probably on /r/transhuman (leverage is always good).
I’m one of the volunteers at the Brain Preservation Foundation. Although sooner is always better, there isn’t a specific reason why now is better than 2 weeks in the future. If you need the money for an emergency fund, I’d wait to donate.
Thank you for that. It looks like I do indeed have a new job, so as soon as the paycheck comes in I will not need to dip into the emergency fund.
By the way, do you happen to know what would happen if the funding didn’t come about? Would you dip into the prize fund, or simply hold off on testing until you got sufficient funding?
As Bakker’s Prince of Nothing-verse books say (which I have lost the last 5 days or so to reading), “Measure is unceasing”.
These first tests, while perhaps the most valuable (as the initial observations of anything usually are), are—hopefully—only the first; if your donations do not change whether the first ones happen, they may change whether the second batch does.
(Even in the extraordinarily unlikely scenario where all the techniques produce perfect preservation according to the first test, one would still want periodic tests to check that the techniques are still being done right.)
That doesn’t answer the question of whether they need the money now or in two weeks. I dislike exposing myself to unnecessary financial risk, particularly at the beginning of a semester and when my work is undergoing layoffs (not that I consider either particularly likely to be a problem, but the risk is distinctly elevated right now and my reserves are lower than I like).
That doesn’t answer the question of whether they need the money now or in two weeks.
Oh, I didn’t realize you really meant now or a few weeks. As far as I know, there is no significant reason why donating now would be better than in a few weeks, aside from looking good here and maybe encouraging some other people to donate (for which a public commitment ought to be enough).
So, some anonymous person can give away $100k to back the prize, but not the $25–50k to fund the evaluation process needed to award the prize?
Why don’t they offer a $50k prize and use the other $50k to fund the evaluation, instead of soliciting donations?
Why don’t they just offer a medal of insignificant material value? The winner is going to get lots of bragging rights anyway.
“The Brain Preservation Foundation was incorporated in Delaware on August 27, 2010. We hold Section 501(c)(3) tax-exempt status as a not-for-profit scientific research organization. Your contributions are fully tax deductible. Thank you.”
According to them, approximately $25-50k—less than the cost of a single cryosuspension. This should be easily fundable with token contributions from people who have expressed an interest in brain preservation.
I’ve donated a relatively small amount, and will donate more when my finances allow (that’s not open-ended; I’m expecting a small windfall in a few months). It should go without saying, but if you have a good employer, check to see if they match charitable donations! Mine turned my donation from a pitifully small one into a just plain small one.
Wouldn’t it be weird if it turns out that there’s an excellent and durable method of preserving brains, but it’s not the one that’s been used for the last half-century or so? Horrifying, obviously, but profoundly weird as well. The two positions I’ve seen on the topic have been “it never has and never will work”, and “it’s worked since some possibly-specified time in the past”. A world in which people who sign up for preservation avoid death if and only if they’re lucky enough to have signed up after, say, 2030, feels weirder than a world where the rational are rewarded, the irrational punished. The zog, I suppose.
A world in which people who sign up for preservation avoid death if and only if they’re lucky enough to have signed up after, say, 2030, feels weirder than a world where the rational are rewarded, the irrational punished
It feels weirder, but has many precedents. Many ‘bubbles’ can be profitably interpreted as people being 100% correct about their vision of the future—but messing up the timing (see http://www.hoover.org/publications/policy-review/article/5646 and http://www.economist.com/news/finance-and-economics/21575737-lessons-americas-long-history-property-booms-betting-house for examples). I used this in another comment, but consider the case of an investor in the ill-fated Pets.com: was the investor right to believe that Americans would spend a ton of money online, such as buying dog food? Absolutely, Amazon is a successful online retail business that stocks thousands of dog food varieties, to say nothing of all the other pet-related goods it sells. But the value of Pets.com still went to ~0. Many startups have a long list of failed predecessors who tried to do pretty much the same thing, and what made them a success was that they happened to give the piñata a whack at the exact moment when some cost curves or events hit the right point. (Facebook is the biggest archive of photographs there has ever been, with truly colossal storage requirements; could it have succeeded in the 1990s? No, and not even later, as demonstrated by Orkut & Friendster, and the lingering death of MySpace.) You can read books from the past about tech visionaries and note how many of them were spot-on in their beliefs about what would happen (The Media Lab was a good example of this—I read it constantly thinking ‘yes, you were right, for all the good it did you’ or ‘not quite, it’d actually take another decade for that to really work out’) but where a person would have been ill-advised to act on the correct forecasts. Or look at computers: imagine an early adopter of an Apple computer saying ‘everyone will use computers eventually!’ Yes, but not for another few decades, and ‘in the long run, we are all dead’.
If cryonics turned out to be worthless for everyone doing it before 2030 while perfectly correct in principle and practical post-2030, it would simply be yet another technology where visionaries were ultimately right despite all nay-saying & skepticism from normals but nevertheless jumped on it too early.
When a knife drops, a fraction of a second divides a brilliant save from an emergency-room visit. They don’t call it the ‘bleeding edge’ for nothing.
Well, a number of things have gone not-as-planned, but it did help to make a public commitment here, and I’ve (finally!) donated an order of magnitude more than I did last year, along with the corresponding employer match. Last year’s donation drive is over, but I expect they’ll still have science to do.
I’m one of the 9.
As am I… though I am surprised they list a donation of only $100.
This deserves recognition; upvoted.
This post inspired me to make a small donation.
Me as well.
I donated a small amount.
This is appreciated; thank you.
See Feeling Rational, The Mystery of the Haunted Rationalist, Summary of “The Straw Vulcan”.
See What Do We Mean By “Rationality”?, Avoid inflationary use of terms.
If it’s irrational not to want to die, what do you think it would be rational to want?
Pleasurable experiences. My life facilitates them, but it doesn’t have to be “my” life. Anyone’s life will do.
And why do you think it’s rational to want this, but not to want one’s own survival?
Because it feels good. My ongoing survival leaves me entirely cold.
How would you distinguish this, as a “rational” reason, from “emotional” reasons, as you did in your previous comment?
Then wireheading is the best solution. The interesting fact is that wireheading anyone else would give you as much utility as wireheading you.
If you go to a really high place, and look over the edge far enough, you’ll find out.
Don’t feed the troll.
Arguing we should seek pleasurable experiences is also an appeal to emotion.
It’s different. The fact that I feel bad when confronted with my own mortality doesn’t mean that mortality is bad. The fact that I feel bad when so confronted does mean that the feeling is bad.
I’m curious. What is your position on wireheading?
I’d like to donate, but at the moment I may have to direct discretionary time and money towards saving Kim Suozzi:
http://www.reddit.com/r/atheism/comments/ydsy5/reddit_help_me_find_some_peace_in_dying_young_im/
The lack of interest in the prize puzzles me. Some very wealthy cryonicists want to tie up fortunes in speculative revival trusts, yet they depend on financially inadequate cryonics organizations to keep them in suspension against foreseeable adversities, and they seem uninterested in trying to improve the science of preserving their own brains. I don’t understand this business model.
Why would you be puzzled when you have answered your own question?
Behavior like that has deepened my skepticism of the cryonics crowd—there are glaring discrepancies between professed beliefs and actual behavior.
Prisoner’s dilemma. If someone else donates and I don’t, I get to eat my cryopreservation and have it too. Or something like that.
At least this thread has rustled up a few more donations.
I think the best explanation for this behavior is that cryonics is essentially a religious funeral ritual.
Most people who get cryopreserved don’t really expect, at a deep level, that it will extend their life, much like most believers in traditional religions don’t really expect an afterlife in the otherworld or reincarnation (that’s why they all fear death and generally try to postpone it as much as possible).
Professing belief in the religious tenets and performing the required rituals may provide some emotional solace as long as willing suspension of disbelief (self-deception, if you prefer) can be maintained. That might explain the lackluster interest in a potentially falsifying experiment: should it turn out that preserved brains are manifestly damaged, maintaining suspension of disbelief would become much more difficult.
Another typical function of religious beliefs and rituals is social signalling: they are a way for a community (transhumanists, in the case of cryonics) to maintain and reinforce social cohesion.
I think this hypothesis is worth bearing in mind. However, it doesn’t explain advancedatheist’s observation that wealthy cryonicists are eager to put a lot of money in revival trusts (whose odds of success are dubious, even if cryonics works) rather than donate to improve cryonics research or the financial viability of cryonics organizations.
Maybe it’s something like the Egyptian pharaohs putting gold and valuables in their pyramids.
The hypothesis “many people are engaging in cryonics as signalling/psychological-reassurance” is not incompatible with the hypothesis “there exist people interested in cryonics on a practical level, eager for potentially falsifying experiments”. Indeed, it’s even possible for both of these things to be true of a single person.
Many long-shot medical procedures serve similar functions—but this does not preclude them from being legitimate medical procedures. And there, too, I would expect a non-trivial subset of patients (and doctors) to be reluctant to seek out falsifying evidence.
There is likely some truth in your assertion that cryonics is fulfilling many of the same psychological and social functions of burial rituals—but that does not adequately explain all behavior in the cryonics arena.
Has Suozzi’s story been confirmed by CI yet?
I don’t know about CI’s due diligence. As the secretary of the Society for Venturism, which has the ability to raise money for Miss Suozzi’s suspension, I can confirm that we’ve pursued our end of checking out her story.
We helped out in getting William O’Rights cryosuspended a few years ago, for example:
http://www.cryonics.org/reports/CI93.html
One of our directors has interviewed Miss Suozzi, and she may have an article about her written up soon, which we’ll post on the Venturists’ website:
http://venturist.info/
Just donated to Kim’s fund.
Do you have any plans to manage a donation fund for her?
Thank you! Could you publicize your confirmation? I believe there are a number of people willing to donate who were holding off until the story was confirmed. What is the best way to donate to the fund?
saving?
Maybe they don’t want to spread the flour on the dragon.
Think pyramids, only you don’t need thousands of slaves and a truly inconvenient amount of sandstone.
When do they need the money by? I’m currently a bit low on funds due to tuition and rent, but I can probably spare $500-1000 in a couple of weeks (I try to maintain an emergency fund, and don’t want to dip into that if I don’t need to).
Also, do you have a page I can link to where they describe the immediate funding gap? If possible, I will turn this into a donation-matching thing, probably on /r/transhuman (leverage is always good).
I’m one of the volunteers at the Brain Preservation Foundation. Although sooner is always better, there isn’t a specific reason why now is better than 2 weeks in the future. If you need the money for an emergency fund, I’d wait to donate.
The page describing the need for our current fundraising campaign is here: http://www.brainpreservation.org/content/letter-president-brain-preservation-foundation
Thank you for that. It looks like I do indeed have a new job, so as soon as the paycheck comes in I will not need to dip into the emergency fund.
By the way, do you happen to know what would happen if the funding didn’t come through? Would you dip into the prize fund, or simply hold off on testing until you got sufficient funding?
I can’t speak authoritatively, but I think testing would have to be put on hold.
The $100,000 for the prize fund is pledged specifically for when the prize is won, so there’s no easy way to change that.
As Bakker’s Prince of Nothing-verse books say (which I have lost the last 5 days or so to reading), “Measure is unceasing”.
These first tests, while perhaps the most valuable (as the initial observations of anything usually are), are—hopefully—only the first; if your donations do not change whether the first ones happen, they may change whether the second batch does.
(Even in the extraordinarily unlikely scenario where all the techniques produce perfect preservation according to the first test, one would still want periodic tests to check that the techniques are still being done right.)
That doesn’t answer the question of whether they need the money now or in two weeks. I dislike exposing myself to unnecessary financial risk, particularly at the beginning of a semester and when my work is undergoing layoffs (not that I consider either particularly likely to be a problem, but the risk is distinctly elevated right now and my reserves are lower than I like).
Oh, I didn’t realize you really meant now or a few weeks. As far as I know, there is no significant reason why donating now would be better than in a few weeks, aside from looking good here and maybe encouraging some other people to donate (for which a public commitment ought to be enough).
So, some anonymous person can give away $100k to back the prize, but not the $25-50k to fund the evaluation process needed to award the prize?
Why don’t they offer a $50k prize and use the other $50k to fund the evaluation, instead of soliciting donations? Why don’t they just offer a medal of insignificant material value? The winner is going to get lots of bragging rights anyway.
Does anyone here know if BPF is a 501(c)(3) organization? If so, I can probably get some of my donation matched by my employer.
“The Brain Preservation Foundation was incorporated in Delaware on August 27, 2010. We hold Section 501(c)(3) tax-exempt status as a not-for-profit scientific research organization. Your contributions are fully tax deductible. Thank you.”
http://www.brainpreservation.org/content/donate
Thanks—I’m not sure why I didn’t see that before. I’ve now requested a match, for an additional $956.
It took me googling “tax exempt site:brainpreservation.org” to find that, so it may not just be you.
Yeah, same here. Their site could use some work.
How much money do they need?
According to them, approximately $25-50k—less than the cost of a single cryosuspension. This should be easily fundable with token contributions from people who have expressed an interest in brain preservation.
I’ve donated a relatively small amount, and will donate more when my finances allow (that’s not open-ended; I’m expecting a small windfall in a few months). It should go without saying, but if you have a good employer, check to see if they match charitable donations! Mine turned my donation from a pitifully small one into a just plain small one.
Wouldn’t it be weird if it turns out that there’s an excellent and durable method of preserving brains, but it’s not the one that’s been used for the last half-century or so? Horrifying, obviously, but profoundly weird as well. The two positions I’ve seen on the topic have been “it never has and never will work”, and “it’s worked since some possibly-specified time in the past”. A world in which people who sign up for preservation avoid death if and only if they’re lucky enough to have signed up after, say, 2030, feels weirder than a world where the rational are rewarded, the irrational punished. The zog, I suppose.
It feels weirder, but has many precedents. Many ‘bubbles’ can be profitably interpreted as people being 100% correct about their vision of the future—but messing up the timing (see http://www.hoover.org/publications/policy-review/article/5646 and http://www.economist.com/news/finance-and-economics/21575737-lessons-americas-long-history-property-booms-betting-house for examples). I used this in another comment, but consider the case of an investor in the ill-fated Pets.com: was the investor right to believe that Americans would spend a ton of money online, such as on buying dog food? Absolutely—Amazon is a successful online retail business that stocks thousands of dog food varieties, to say nothing of all the other pet-related goods it sells. But the value of Pets.com still went to ~0. Many startups have a long list of failed predecessors who tried to do pretty much the same thing, and what made them a success was that they happened to give the piñata a whack at the exact moment when some cost curves or events hit the right point. (Facebook is the biggest archive of photographs there has ever been, with truly colossal storage requirements; could it have succeeded in the 1990s? No, and not even later, as demonstrated by Orkut & Friendster, and the lingering death of MySpace.) You can read books from the past about tech visionaries and note how many of them were spot-on in their beliefs about what would happen (The Media Lab was a good example of this—I read it constantly thinking ‘yes, you were right, for all the good it did you’ or ‘not quite, it’d actually take another decade for that to really work out’) but where a person would have been ill-advised to act on the correct forecasts. Or look at computers: imagine an early adopter of an Apple computer saying ‘everyone will use computers eventually!’ Yes, but not for another few decades, and ‘in the long run, we are all dead’.
If cryonics turned out to be worthless for everyone doing it before 2030 while perfectly correct in principle and practical post-2030, it would simply be yet another technology where visionaries were ultimately right despite all nay-saying & skepticism from normals but nevertheless jumped on it too early.
When a knife drops, a fraction of a second divides a brilliant save from an emergency-room visit. They don’t call it the ‘bleeding edge’ for nothing.
Wow; that just reminded me of a bit from The Smartest Guys In The Room, where Enron partnered with Blockbuster to stream movies-on-demand over the internet in 2000. It was a scam, but clearly someone thought it was a real thing. (Netflix started streaming movies in 2007.)
And—yes, you said it. Projects like this and OpenWorm are particularly important because they help narrow down really uncertain things; OpenWorm, for instance, might be able to settle the “neurons are really complicated”/”neurons are accurately simulatable-in-bulk by simple models” dispute, as well as the “the connectome is/is not sufficient” thing.
Well, a number of things have gone not-as-planned, but it did help to make a public commitment here, and I’ve (finally!) donated an order of magnitude more than I did last year, along with the corresponding employer match. Last year’s donation drive is over, but I expect they’ll still have science to do.
I look forward to seeing the results.
Donated $100. I think there are 26 total donors now.