I haven’t yet read and thought enough about this topic to form a very solid opinion, but I have two remarks nevertheless.
First, as some previous commenters have pointed out, most of the discussions of cryonics fail to fully appreciate the problem of weirdness signals. For people whose lives don’t revolve around communities that are supportive of such undertakings, the cost of signaled weirdness can easily be far larger than the monetary price. Of course, you can argue that this is because the public opinion on the topic is irrational and deluded, but the point is that given the present state of public opinion, which is impossible to change by individual action, it is individually rational to take this cost into account. (Whether the benefits ultimately overshadow this cost is a different question.)
Second, it is my impression that many cryonics advocates—and in particular, many of those whose comments I’ve read on Overcoming Bias and here—make unjustified assertions about supposedly rational ways to decide the question of what entities one should identify oneself with. According to them, signing up for cryonics increases the chances that at some distant time in the future, in which you’ll otherwise probably be dead and gone, some entity will exist with which it is rational to identify to the point where you consider it, for the purposes of your present decisions, to be the same as your “normal” self that you expect to be alive tomorrow.
This is commonly supported by arguing that your thawed and revived or uploaded brain decades from now is not a fundamentally different entity from you in any way that wouldn’t also apply to your present brain when it wakes up tomorrow. I actually find these arguments plausible, but the trouble is that they, in my view, prove too much. What I find to be the logical conclusion of these arguments is that the notion of personal identity is fundamentally a mere subjective feeling, where no objective or rational procedure can be used to determine the right answer. Therefore, if we accept these arguments, there is no reason at all to berate as irrational people who don’t feel any identification with these entities that cryonics would (hopefully) make it possible to summon into existence in the future.
In particular, I personally can’t bring myself to feel any identification whatsoever with some computer program that runs a simulation of my brain, no matter how accurate, and no matter how closely isomorphic its data structures might be to the state of my brain at any point in time. And believe me, I have studied all the arguments for the contrary position I could find here and elsewhere very carefully, doing my utmost to eliminate any prejudice. (I am more ambivalent about my hypothetical thawed and nanotechnologically revived corpse.) Therefore, in at least some cases, I’m sure that people reject cryonics not because they’re too biased to assess the arguments in favor of it, but because they honestly feel no identification with the future entities that it aims to produce -- and I don’t see how this different subjective preference can be considered “irrational” in any way.
That said, I am fully aware that these and other anti-cryonics arguments are often used as mere rationalizations for people’s strong instinctive reactions triggered by the weirdness/yuckiness heuristics. Still, they seem valid to me.
Would it change your mind if you discovered that you’re living in a simulation right now?
It would probably depend on the exact nature of the evidence that would support this discovery. I allow for the possibility that some sorts of hypothetical experiences and insights that would have the result of convincing me that we live in a simulation would also have the effect of dramatically changing my intuitions about the question of personal identity. However, mere thought-experiment considerations of those I can imagine presently fail to produce any such change.
I also allow for the possibility that this is due to the limitations of my imagination and reasoning, perhaps caused by unidentified biases, and that actual exposure to some hypothetical (and presently counterfactual) evidence that I’ve already thought about could perhaps have a different effect on me than I presently expect it would.
For full disclosure, I should add that I see some deeper problems with the simulation argument that I don’t think are addressed in a satisfactory manner in the treatments of the subject I’ve seen so far, but that’s a whole different can of worms.
Well, a concrete scenario would be that the simulators calmly reveal themselves to you and demonstrate that they can break the laws of physics, for example by just wiggling the sun around in the sky, disconnecting your limbs without blood coming out or pain, making you float, etc.
That would fall under the “evidence that I’ve already thought about” mentioned above. My intuitions would undoubtedly be shaken and moved, perhaps in directions that I presently can’t even imagine. However, ultimately, I think I would be led to conclude that the whole concept of “oneself” is fundamentally incoherent, and that the inclination to hold any future entity or entities in special regard as “one’s future self” is just a subjective whim. (See also my replies to kodos96 in this thread.)
Would it change your mind if that computer program [claimed to] strongly identify with you?

I’m not sure I understand your question correctly. The mere fact that a program outputs sentences that express strong claims about identifying with me would not be relevant in any way I can think of. Or am I missing something in your question?
Well right, obviously a program consisting of printf("I am Vladimir_M") wouldn’t qualify… but a program which convincingly claimed to be you, i.e., had access to all your memories, intellect, inner thoughts, etc., and claimed to be the same person as you.
No, as I wrote above, I am honestly unable to feel any identification at all with such a program. It might as well be just a while(1) loop printing a sentence claiming it’s me.
I know of some good arguments that seem to provide a convincing reductio ad absurdum of such a strong position, most notably the “fading qualia” argument by David Chalmers, but on the other hand, I also see ways in which the opposite view entails absurdity (e.g. the duplication arguments). Thus, I don’t see any basis for forming an opinion here except sheer intuition, which in my case strongly rebels against identification with an upload or anything similar.
If you woke up tomorrow to find yourself situated in a robot body, and were informed that you had been killed in an accident and your mind had been uploaded and was now running on a computer, but you still felt, subjectively, entirely like “yourself”, how would you react? Or do you not think that that could ever happen? (that would be a perfectly valid answer, I’m just curious what you think, since I’ve never had the opportunity to discuss these issues with someone who was familiar with the standard arguments, yet denied the possibility)
For the robotic “me”—though not for anyone else—this would provide a conclusive answer to the question of whether uploads and other computer programs can have subjective experiences. However, although fascinating, this finding would provide only a necessary, not a sufficient condition for a positive answer to the question we’re pursuing, namely whether there is any rational reason (as opposed to freely variable subjective intuitions and preferences) to identify this entity with my present self.
Therefore, my answer would be that I don’t know how exactly the subjective intuitions and convictions of the robotic “me” would develop from this point on. It may well be that he would end up feeling strongly that he is the true continuation of my person and rejecting what he would remember as my present intuitions on the matter (though this would be complicated by the presumable ease of making other copies). However, I don’t think he would have any rational reason to conclude that it is somehow factually true that he is the continuation of my person, rather than some entirely different entity that has been implanted with false memories identical to my present ones.
Of course, I am aware that a similar argument can be applied to the “normal me” who will presumably wake up in my bed tomorrow morning. Trouble is, I would honestly find it much easier to stop caring about what happens to me tomorrow than to start caring about computer simulations of myself. Ultimately, it seems to me that the standard arguments that are supposed to convince people to broaden their parochial concepts of personal identity should in fact lead one to dissolve the entire concept as an irrational reification that is of no concern except that it’s a matter of strong subjective preferences.
Getting copied from a frozen brain into a computer is a pretty drastic change, but suppose instead it were done gradually, one neuron at a time. If one of your neurons were replaced with an implant that behaved the same way, would it still be you? A cluster of N neurons? What if you replaced your entire brain with electronics, a little at a time?
Obviously there is a difference, and that difference is significant to identity; but I think that difference is more like the difference between me and my younger self than the difference between me and someone else.
While I understand why someone would see the upload as possibly not themselves (and I have strong sympathy with that position), I do find it genuinely puzzling that someone wouldn’t identify their revived body as themselves. While some people might argue that they have no connection to the entity that will have their memories a few seconds from now, the vast majority of humans don’t buy into that argument. If they don’t, then it is hard to see how a human who is cooled and then revived is any different from a human who has their heart stopped for a bit during a heart transplant, or someone who stops breathing in a very cold environment for a few minutes, or someone who goes to sleep under anesthesia, or even someone who goes to sleep normally and wakes up in the morning.
Your point about weirdness signaling is a good one, and I’d expand on it slightly: For much of society, even thinking about weird things at a minimal level is a severe weirdness signal. So for many people, the expected utility of any random weird idea is likely to be so low that even the cost of putting in the effort to think about it will almost certainly outweigh any benefit. And when one considers how many weird ideas are out there, the chance that any given one of them will turn out to be useful is very low. To use just a few examples, just how many religions are there? How many conspiracy theories? How many miracle cures? Indeed, almost all LW readers will never investigate the vast majority of these, for essentially this sort of utility-heuristic reason.
While some people might argue that they have no connection to the entity that will have their memories a few seconds from now, the vast majority of humans don’t buy into that argument. If they don’t, then it is hard to see how a human who is cooled and then revived is any different from a human who has their heart stopped for a bit during a heart transplant, or someone who stops breathing in a very cold environment for a few minutes, or someone who goes to sleep under anesthesia, or even someone who goes to sleep normally and wakes up in the morning.
The problem here is one of a continuum. We can easily imagine a continuum of procedures where at one end we have relatively small ones that intuitively appear to preserve the subject’s identity (like sleep or anesthesia), and at the other end more radical ones that intuitively appear to end up destroying the original and creating a different person. By Buridan’s principle, this situation implies that for anyone whose intuitions give different answers for the procedures at the opposite ends of the continuum, at least some procedures that lie in between will result in confused and indecisive intuitions. For me, cryonic revival seems to be such a point.
In any case, I honestly don’t see any way to establish, as a matter of more than just subjective opinion, at which exact point in that continuum personal identity is no longer preserved.
This seems similar to something that I’ll arbitrarily decide to call the ‘argument from arbitrariness’: every valid argument should be pretty and neat and follow the zero, one, infinity rule. One example of this was during the torture versus dust specks debate, when the torturers chided the dust speckers for having an arbitrary point at which stimuli that were not painful enough to be considered true pain became just painful enough to be considered as being in the same reference class as torture. I’d be really interested to find out how often something like the argument from arbitrariness turns out to have been made by those on the ultimately correct side of the argument, and use this information as a sort of outside view.
In the relevant sense there is no difference between the Richard that wakes up in my bed tomorrow and the Richard that might be revived after cryonic preservation. Neither of them is a continuation of my self in the relevant sense because no such entity exists. However, evolution has given me the illusion that tomorrow-Richard is a continuation of my self, and no matter how much I might want to shake off that illusion I can’t. On the other hand, I have no equivalent illusion that cryonics-Richard is a continuation of my self. If you have that illusion you will probably be motivated to have yourself preserved.
Ultimately this is not a matter of fact but a matter of personal preference. Our preferences cannot be reduced to mere matters of rational fact. As David Hume famously wrote: “’Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger.” I prefer the well-being of tomorrow-Richard to his suffering. I have little or no preference regarding the fate of cryonics-Richard.
I don’t mean to insult you (I’m trying to respect your intelligence enough to speak directly rather than delicately) but this kind of talk is why cryonics seems like a pretty useful indicator of whether or not a person is rational. You’re admitting to false beliefs that you hold “because you evolved that way” rather than using reason to reconcile two intuitions that you “sort of follow” but which contradict each other.
Then you completely discounted the suffering or happiness of a human being who is not able to be helped by anyone other than your present self in this matter. You certainly can’t be forced to seek medical treatment against your will for this, so other people are pretty much barred by law from forcing you to not be dumb with respect to the fate of future-Richard. He is in no one’s hands but your own.
Hume was right about a huge amount of stuff in the context of initial epistemic conditions of the sort that Descartes proposed when he extracted “I think therefore I am” as one basis for a stable starting point.
But starting from that idea and a handful of others like “trust of our own memories as a sound basis for induction” we have countless terabytes of sense data from which we can develop a model of the universe that includes physical objects with continuity over time—one class of which are human brains that appear to be capable of physically computing the same thoughts with which we started out in our “initial epistemic conditions”. The circle closes here. There might be some new evidence somewhere if some kind of Cartesian pineal gland is discovered someday which functions as the joystick by which souls manipulate bodies, but barring some pretty spectacular evidence, materialist views of the soul are the best theory standing.
Your brain has physical continuity in exactly the same way that chairs have physical continuity, and your brain tomorrow (after sleeping tonight while engaging in physical self repair and re-indexing of data structures) will be very similar to your brain today in most but not all respects. To the degree that you make good use of your time now, your brain then is actually likely to implement someone more like your ideal self than even you yourself are right now… unless you have no actualized desire for self improvement. The only deep change between now and then is that you will have momentarily lost “continuity of awareness” in the middle because your brain will go into a repair and update mode that’s not capable of sensing your environment or continuing to compute “continuity of awareness”.
If your formal theory of reality started with Hume and broke down before reaching these conclusions then you are, from the perspective of pragmatic philosophy, still learning to crawl. This is basically the same thing as babies learning about object permanence except in a more abstract context.
Barring legitimate pragmatic issues like discount rates, your future self should be more important to you than your present self, unless you’re mostly focused on your “contextual value” (the quality of your relationships and interactions with the broader world) and feel that your contextual value is high now and inevitably declining (or perhaps will be necessarily harmed by making plans for cryonics).
The real thing to which you should be paying attention (other than to make sure they don’t stop working) is not the mechanisms by which mental content is stored, modified, and transmitted into the future. The thing you should be paying attention to is the quality of that content and how it functionally relates to the rest of the physical universe.
For the record, I don’t have a cryonics policy either, but I regard this as a matter of a failure to conscientiously apply myself to executing on an issue that is obviously important. Once I realized the flaw in my character that led to this state of affairs I began working to fix it, which is something that, for me, is still a work in progress.
Part of my work is analyzing the issue enough to have a strongly defensible, coherent, and pragmatic position on cryonics, which I’ll consider to have been fully resolved either (1) once I have an argument for not signing up that would be good enough for a person able to reason in a relatively universal manner, or (2) once I have a solid argument the other way which has led me and everyone I care about, including my family and close friends, to take the necessary steps and sign ourselves up.
When I set up a “drake equation for cryonics” and filled in the probabilities under optimistic (inside view) assumptions, I determined the value to be trillions of dollars. Under pessimistic assumptions (roughly, the outside view), I found that the expected value was epsilon and realized that my model was flawed because it didn’t even have terms for negative-value outcomes like “loss of value in ‘some other context’ because of cryonics/simulationist interactions”.
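For concreteness, here is a minimal sketch of the kind of expected-value model being described; every probability, dollar figure, and term name below is a made-up placeholder rather than the numbers actually used, and treating the probabilities as independent is a simplification:

```python
# Hypothetical sketch of a "drake equation for cryonics": a chain of success
# probabilities multiplied together, weighed against the value of revival,
# the total cost, and (as noted above) possible negative-value outcomes.
# Every number below is a made-up placeholder.

def cryonics_expected_value(p_good_preservation, p_provider_survives,
                            p_revival_tech, p_society_revives_you,
                            value_of_revival, total_cost,
                            p_bad_outcome=0.0, bad_outcome_value=0.0):
    """Crude expected-value model; assumes the probabilities are independent."""
    p_success = (p_good_preservation * p_provider_survives *
                 p_revival_tech * p_society_revives_you)
    return (p_success * value_of_revival
            + p_bad_outcome * bad_outcome_value
            - total_cost)

# Optimistic ("inside view") placeholders:
print(cryonics_expected_value(0.9, 0.9, 0.8, 0.8, 10_000_000_000, 100_000))
# Pessimistic ("outside view") placeholders, with a negative-outcome term added:
print(cryonics_expected_value(0.3, 0.3, 0.1, 0.1, 1_000_000, 100_000,
                              p_bad_outcome=0.01, bad_outcome_value=-10_000_000))
```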
So, pretty much, I regard the value of information here as being enormously large, and once I refine my models some more I expect to have a good idea as to what I really should do as a selfish matter of securing adequate health care for me and my family and friends. Then I will do it.
Hi Jennifer. Perhaps I seem irrational because you haven’t understood me. In fact I find it difficult to see much of your post as a response to anything I actually wrote.
No doubt I explained myself poorly on the subject of the continuity of the self. I won’t dwell on that. The main question for me is whether I have a rational reason to be concerned about what tomorrow-Richard will experience. And I say there is no such rational reason. It is simply a matter of brute fact that I am concerned about what he will experience. (Vladimir and Byrnema are making similar points above.) If I have no rational reason to be concerned, then it cannot be irrational for me not to be concerned. If you think I have a rational reason to be concerned, please tell me what it is.
I don’t understand why psychological continuity isn’t enough of a rational reason. Your future self will have all your memories, thoughts, viewpoints, and values, and you will experience a continuous flow of perception from yourself now to your future self. (If you sleep or undergo general anesthesia in the interim, the flow may be interrupted slightly, but I don’t see why that matters.)
Hi Blueberry. How is that a rational reason for me to care what I will experience tomorrow? If I don’t care what I will experience tomorrow, then I have no reason to care that my future self will have my memories or that he will have experienced a continuous flow of perception up to that time.
We have to have some motivation (a goal, desire, care, etc) before we can have a rational reason to do anything. Our most basic motivations cannot themselves be rationally justified. They just are what they are.
Of course, they can be rationally explained. My care for my future welfare can be explained as an evolved adaptive trait. But that only tells me why I do care for my future welfare, not why I rationally should care for my future welfare.
Richard, you seem to have come to a quite logical conclusion about the difference between intrinsic values and instrumental values and what happens when an attempt is made to give a justification for intrinsic values at the level of values.
If a proposed intrinsic value is questioned and justified with another value statement, then the supposed “intrinsic value” is revealed to have really been instrumental. Alternatively, if no value is offered then the discussion will have necessarily moved out of the value domain into questions about the psychology or neurons or souls or evolutionary mechanisms or some other messy issue of “simple” fact. And you are quite right that these facts (by definition as “non value statements”) will not be motivating.
We fundamentally like vanilla (if we do) “because we like vanilla” as a brute fact. De gustibus non est disputandum. Yay for the philosophy of values :-P
On the other hand… basically all humans, as a matter of fact, do share many preferences, not just for obvious things like foods that are sweet or salty or savory but also for really complicated high level things, like the respect of those with whom we regularly spend time, the ability to contribute to things larger than ourselves, listening to beautiful music, and enjoyment of situations that create “flow” where moderately challenging tasks with instantaneous feedback can be worked on without distraction, and so on.
As a matter of simple observation, you must have noticed that there exist some things which it gives you pleasure to experience. To say that “I don’t care what I will experience tomorrow” can be interpreted as a prediction that “Tomorrow, despite being conscious, I will not experience anything which affects my emotions, preferences, feelings, or inclinations in either positive or negative directions”. This statement is either bluntly false (my favored hypothesis), or else you are experiencing a shocking level of anhedonia for which you should seek professional help if you want to live very much longer (which of course you might not if you’re really experiencing anhedonia), or else you are a non human intelligence and I have to start from scratch trying to figure you out.
Taking it as granted that you and I can both safely predict that you will continue to enjoy life tomorrow… then an inductive proof can be developed that “unless something important changes from one day to the next” you will continue to have a stake in the day after that, and the day after that, and so on. When people normally discuss cryonics and long term values it is the “something important changing” issue that they bring up.
For example, many people think that they only care about their children… until they start seeing their grandchildren as real human beings whose happiness they have a stake in, and in whose lives they might be productively involved.
Other people can’t (yet) imagine not falling prey to senescence, and legitimately think that death might be preferable to a life filled with pain which imposes costs (and no real benefits) on their loved ones who would care for them. In this case the critical insight is that not just death but also physical decline can be thought of as a potentially treatable condition and so we can stipulate not just vastly extended life but vastly extended youth.
But you are not making any of these points so that they can even be objected to by myself or others… You’re deploying the kind of arguments I would expect from an undergrad philosophy major engaged in motivated cognition because you have not yet “learned how to lose an argument gracefully and become smarter by doing so”.
And it is for this reason that I stand by the conclusion that in some cases beliefs about cryonics say very much about the level of pragmatic philosophical sophistication (or “rationality”) that a person has cultivated up to the point when they stake out one of the more “normal” anti-cryonics positions. In your case, you are failing in a way I find particularly tragic, because normal people raise much better objections than you are raising—issues that really address the meat of the matter. You, on the other hand, are raising little more than philosophical confusion in defense of your position :-(
Again, I intend these statements only in the hope that they help you and/or audiences who may be silently identifying with your position. Most people make bad arguments sometimes and that doesn’t make them bad people—in fact, it helps them get stronger and learn more. You are a good and valuable person even if you have made comments here that reveal less depth of thinking than might be hypothetically possible.
That you are persisting in your position is a good sign, because you’re clearly already pretty deep into the cultivation of rationality (your arguments clearly borrow a lot from previous study) to the point that you may harm yourself if you don’t push through to the point where your rationality starts paying dividends. Continued discussion is good practice for this.
On the other hand, I have limited time and limited resources and I can’t afford to spend any more on this line of conversation. I wish you good luck on your journey, perhaps one day in the very far future we will meet again for conversation, and memory of this interaction will provide a bit of amusement at how hopelessly naive we both were in our misspent “childhood” :-)
Why is psychological continuity important? (I can see that it’s very important for an identity to have psychological continuity, but I don’t see the intrinsic value of an identity existing merely because it is promised to have psychological continuity.)
In our lives, we are trained to worry about our future self because eventually our plans for our future self will affect our immediate self. We also might care about our future self altruistically: we want that person to be happy just as we would want any person to be happy whose happiness we are responsible for. However, I don’t sense any responsibility to care about a future self that needn’t exist. On the contrary, if this person has no effect on anything that matters to me, I’d rather be free of being responsible for this future self.
In the case of cryonics, you may or may not decide that your future self has an effect on things that matter to you. If your descendants matter to you, or propagating a certain set of goals matters to you, then cryonics makes sense. I don’t have any goals that project further than the lifespan of my children. This might be somewhat unique, and it is the result of recent changes in my philosophy. As a theist, I had broad-stroke hopes for the universe that are now gone.
Less unique, I think, though perhaps not generally realized, is the fact that I don’t feel any special attachment to my memories, thoughts, viewpoints and values. What if a person woke up to discover that the last few days were a dream and they actually had a different identity? I think they wouldn’t be depressed about the loss of their previous identity. They might be depressed about the loss of certain attachments if the attachments remained (hopefully not too strongly, as that would be sad). The salient thing here is that all identities feel the same.
He admits that the possibility of duplicating a person raises a serious question about the nature of personal identity, that continuity is no solution to this problem, and that he can find no other solution. But he doesn’t seem to consider that the absence of any solution points to his concept of personal identity being fundamentally flawed.
Interesting. However, I don’t see any problems with the nature of personal identity. My hunch is that I’m actually not confused about it.
In a lifetime, there is continuity of memories and continuity of values and goals even as they slowly change over time. I can trust that the person who wakes up tomorrow will be ‘me’ in this sense. She may be more refreshed and have more information, but I trust her to act as “I” would. On the other hand, she might be excessively grouchy or suffer a brain injury, in which case this trust is misplaced. However, she is not me, personal-identity-wise, for a variety of reasons:
I do not have access to her stream of consciousness.
I do not have operative control of her body.
[In both cases, the reason is that her thoughts and actions take place in the future. Eventually, I will have access to her thoughts and control of her body and then she becomes “me”.]
Personal identity exists only for a moment. It is the running of some type of mental thought process.
Suppose I was duplicated overnight, and two byrnemas woke up in the morning. Both byrnemas would have continuity with the previous byrnema with respect to memories, values and goals. However, neither of them is the personal identity of the byrnema of the night before, just as whenever I wake up I’m not the personal identity of the night before, exactly for the reasons I bulleted.
With the two duplicates, there would be two distinct personal identities. You simply count the number of independent accesses to thoughts and motor control of bodies and arrive at two. Both byrnemas have a subjective experience of personal identity, of course, and consider the other byrnema an “other”. However, this “other” is similar to oneself in a way that is unprecedented, a twin sister who also has your memories, goals and values.
I think duplicates would be most problematic for loved ones. They would find themselves in a position of loving both duplicates, and being able to empathize with both, but not really caring so much if one was deleted, but being very distraught if both were deleted. That would be strange, because we haven’t had any experience with that, but I’m sure we would adjust well enough.
People would take risks with their person, but only after checking and double-checking that their backup was recent and well. People wouldn’t care if their person died—they would understand (now through experience rather than introspection) that what makes them them is their memories, values, goals and a moment. And the moment is transient anyway. The illusion of self existing for more than a moment would be broken.
The post you linked to by Ben Best mentioned the impossibility of a personal identity in two different physical locations. Actually, interestingly, it would be possible to have an identity in two physical locations. To do this, you would need to stream the sensory data of two bodies into a single brain, located anywhere. As long as the brain had access to both bodies’ sensory data, and could operate both bodies, and there was a single shared stream of consciousness, then that person would be physically located in two places at once. (But this is completely different from just duplicating a person.)
If you care about a person, then while you might not care as much if a recent duplicate or a recently duplicated person were lost, you would still care about as much if either of them suffers.
As is implied by my ‘recently’, the two will diverge, and you might end up with loyalty to both as distinct individuals, or with a preference for one of them.
Also, I don’t think parents value each of newborn twins less because they have a spare.
For the record, I don’t have a cryonics policy either, but I regard this as a matter of a failure to conscientiously apply myself to executing on an issue that is obviously important. Once I realized the flaw in my character that led to this state of affairs I began working to fix it, which is something that, for me, is still a work in progress.
I’m in the signing process right now, and I wanted to comment on the “work in progress” aspect of your statement. People think that signing up for cryonics is hard. That it takes work. I thought this myself up until a few weeks ago. This is stunningly NOT true.
The entire process is amazingly simple. You contact CI (or your preserver of choice) via their email address and express interest. They ask you for a few bits of info (name, address) and send you everything you need already printed and filled out. All you have to do is sign your name a few times and send it back. The process of getting life insurance was harder (and getting life insurance is trivially easy).
So yeah, the term “working on it” is not correctly applicable to this situation. Someone who’s never climbed a flight of stairs may work out for months in preparation, but they really don’t need to, and afterwards might be somewhat annoyed that no one who’d climbed stairs before had bothered to tell them so.
Literally the only hard part is the psychological effort of doing something considered so weird. The hardest part for me (and what had stopped me for two+ years previously) was telling my insurance agent when she asked “What’s CI?” that it’s a place that’ll freeze me when I die. I failed to take into account that we have an incredibly tolerant society. People interact—on a daily basis—with other humans who believe in gods and energy crystals and alien visits and secret-muslim presidents without batting an eye. This was no different. It was like the first time you leap from the high diving board and don’t die, and realize that you never would have.
The hard part (and why this is also a work in progress) involves secondary optimizations, the right amount of effort to put into them, and understanding whether these issues generalize to other parts of my life.
SilasBartas identified some of the practical financial details involved in setting up whole life versus term plus savings versus some other option. This is even more complex for me because I don’t currently have health insurance and ideally would like to have a personal physician, health insurance, and retirement savings plan that are consistent with whatever cryonics situation I set up.
Secondarily, there are similarly complex social issues that come up because I’m married, love my family, am able to have philosophical conversations with them, and don’t want to “succeed” at cryonics but then wake up for 1000 years of guilt that I didn’t help my family “win” too. If they don’t also win, when I could have helped them, then what kind of a daughter or sister would I be?
Finally, I’ve worked on a personal version of a “drake equation for cryonics” and it honestly wasn’t a slam dunk economic decision when I took a pessimistic outside view of my model. So it would seem that more analysis here would be prudent, which would logically require some time to perform. If I had something solid I imagine that would help convince my family—given that they are generally rational in their own personal ways :-)
As a meta issue, there are also questions around cognitive inertia in both the financial and the social arenas, so that whatever decisions I make now may “stick” for the next forty years. Against this I weigh the issue of “best being the enemy of good” because (in point of fact) I’m not safe in any way at all right now… which is an obvious negative. In what places should I be willing to tolerate erroneous thinking and sloppy execution that fails to obtain the maximum lifetime benefit, and to what degree should I carry that “sloppiness calibration” over to the rest of my life?
So, yeah, it’s a work in progress.
I’m pretty much not afraid of the social issues that you brought up. If people who disagree with me about the state of the world want to judge me, that’s their problem up until they start trying to sanction me or spread malicious gossip that blocks other avenues of self improvement or success. The judgment of strangers who I’ll never see again is mostly a practical issue and not that relevant compared to relationships that really matter, like those with my husband, nuclear family, friends, personal physician, and so on.
Back in 1999 I examined these issues. In 2004 I got to the point of having all the paperwork to sign and turn in with Alcor and Insurance, with all costs pre-specified. In each case I backed off because I calculated the costs and looked at my income and looked at the things I’d need to cut out of my life (and none of it was coffee from starbucks or philanthropy or other fluffy BS like that—it was more like the simple quality of my food and whether I’d be able to afford one bedroom vs half a bedroom) and they honestly didn’t seem to be worth it. As I’ve gotten older and richer and more influential (and partly due to influence from this community) I’ve decided I should review the decision again.
The hard part for me is dotting the i’s and crossing the t’s (and trying to figure out where it’s safe to skip some of these steps) while seeking to minimize future regrets and maximize positive outcomes.
don’t want to “succeed” at cryonics but then wake up for 1000 years of guilt that I didn’t help my family “win” too. If they don’t also win, when I could have helped them, then what kind of a daughter or sister would I be?
You can’t hold yourself responsible for their decisions. That way lies madness, or tyranny. If you respect them as free agents then you can’t view yourself as the primary source for their actions.
It might be rational to do so under extreme enough circumstances. For example, if a loved one had to take pills every day to stay alive and had a tendency to accidentally forget them (or to believe new-agers who told them that the pills were just a Big Pharma conspiracy), it would be neither madness nor tyranny to do nearly anything to prevent that from happening.
The question is: to what degree is failing to sign up for cryonics like suicide by negligence?
I used State Farm, because I’ve had car insurance with them since I could drive, and renters/owner’s insurance since I moved out on my own. I had discounts both for multi-line and loyalty.
Yes, there is some interaction with a person involved. And you have to sit through some amount of sales-pitching. But ultimately it boils down to answering a few questions (2-3 minutes), signing a few papers (1-2 minutes), sitting through some process & pitching (30-40 minutes), and then having someone come to your house a few days later to take some blood and measurements (10-15 minutes). Everything else was done via mail/email/fax.
Heck, my agent had to do much more work than I did; previous to this she didn’t know that you could designate someone other than yourself as the owner of the policy, which required some training on her part.
I tried a State Farm guy, and he was nice enough, but he wanted a saliva sample (not blood) and could not tell me what it was for. He gave me an explicitly partial list but couldn’t complete it for me. That was spooky. I don’t want to do that.
Come to think of it, I didn’t even bother asking what the blood sample was for. But I tend to be exceptionally un-private. I don’t expect privacy to be a part of life among beings who regularly share their source code.
It’s not a matter of privacy. I can’t think of much they’d put on the list that I wouldn’t be willing to let them have. (The agent acted like I could only possibly be worried that they were going to do genetic testing, but I’d let them do that as long as they, you know, told me, and gave me a copy of the results.) It was just really not okay with me that they wanted it for undisclosed purposes. Lack of privacy and secrets shouldn’t be unilateral.
Disagree. What’s this trivially easy part? You can’t buy it like you can buy mutual fund shares, where you just go online, transfer the money, and have at it. They make it so you have to talk to an actual human insurance agent, just to get quotes. (I understand you’ll have to get a medical exam, but still...)
Of course, in fairness, I’m trying to combine it with “infinite banking” by getting a whole life policy, which has tax advantages. (I would think whole life would make more sense than term anyway, since you don’t want to limit the policy to a specific term, risking that you’ll die afterward and not be able to afford the preservation, when the take-off hasn’t happened.)
I would think whole life would make more sense than term anyway
Nope. Whole life is a colossal waste of money. If you buy term and invest the difference in the premiums (what you would be paying the insurance company if you bought whole life) you’ll end up way ahead.
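For concreteness, a minimal sketch of the arithmetic behind that claim; the premiums, rates of return, expense load, and tax rate below are made-up placeholders, not figures from any actual policy:

```python
# Hypothetical comparison of "buy term and invest the difference" vs. a whole
# life policy's cash value. All premiums, rates, and tax figures are made-up
# placeholders, not quotes from any insurer.

def invest_the_difference(whole_premium, term_premium, years,
                          annual_return, tax_on_gains):
    """Invest the premium difference each year; tax the gains once at the end."""
    balance = 0.0
    contributed = 0.0
    for _ in range(years):
        balance = (balance + (whole_premium - term_premium)) * (1 + annual_return)
        contributed += whole_premium - term_premium
    gains = balance - contributed
    return balance - gains * tax_on_gains

def whole_life_cash_value(whole_premium, years, credited_rate, expense_load):
    """Very rough cash-value accumulation: part of each premium goes to
    expenses and the cost of insurance before anything is credited."""
    value = 0.0
    for _ in range(years):
        value = (value + whole_premium * (1 - expense_load)) * (1 + credited_rate)
    return value

# Placeholder numbers: $5,000/yr whole life vs. $500/yr term, held 20 years.
print(invest_the_difference(5000, 500, 20, annual_return=0.07, tax_on_gains=0.15))
print(whole_life_cash_value(5000, 20, credited_rate=0.04, expense_load=0.20))
```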
Yes, I’m intimately familiar with the argument. And while I’m not committed to whole life, this particular point is extremely unpersuasive to me.
For one thing, the extra cost for whole life is mostly retained by you, nearly as if you had never spent it, which makes it questionable how much of that extra cost is really a cost.
That money goes into an account which you can withdraw from, or borrow from on much more favorable terms than any commercial loan. It also earns dividends and guaranteed interest tax-free.
If you “buy term and invest the difference”, you either have to pay significant taxes on any gains (or even, in some cases, the principal) or lock the money up until you’re ~60. The optimistic “long term” returns of the stock market have been shown to be a bit too optimistic, and given the volatility, you are being undercompensated. (Mutual whole life plans typically earned over 6% in ’08, when stocks tanked.) You are also unlikely to earn the 12%/year they always pitch for mutual funds—and especially not after taxes.
Furthermore, if the tax advantages of IRAs are reneged on (which given developed countries’ fiscal situations, is looking more likely every day), they’ll most likely be hit before life insurance policies.
So yes, I’m aware of the argument, but there’s a lot about the calculation that people miss.
It’s really hard to understand insurance products with the information available on the internet, and you are right that it is extremely unfriendly to online research. When I investigated whole life vs. term a few years ago, I came to the conclusion that there are a lot of problems with whole life and I wouldn’t touch it with a ten-foot pole.
For one thing, the extra cost for whole life is mostly retained by you, nearly as if you had never spent it, which makes it questionable how much of that extra cost is really a cost.
Actually, there is something far weirder and more insidious going on. By “extra cost,” I assume you are referring to the extra premium that goes into the insurance company’s cash value investment account, beyond the amount of premium that goes towards your death benefit (aka “face amount,” aka “what the insurance company pays to your beneficiary if you die while the policy is in force”). Wait, what? Didn’t I mean your cash value account? Weren’t my words “the insurance company’s cash value account” just a slip of the tongue? Read on...
Let’s take a look at the FAQ of the NY Dept. of Insurance, which explains the difference between the face amount of your policy (aka “death benefit,” aka “what the insurance company pays to your beneficiary if you die while the policy is in force”) and the cash value:
The face amount is the amount of coverage you wish to provide your beneficiaries in the event of death. The cash value is the value that builds up in the policy. The minimum cash values are set by the Insurance Law and reflect an accumulation of your premiums after allowances for company expenses and claims. When you are young, your premiums are more than the cost of insuring your life at that time. Over time the cash value grows, usually tax-deferred, and the owner may be allowed access to that money in the form of a policy loan or payment of the cash value. The face amount of your policy will be higher than your cash value especially in the early years of your policy. If you surrender your policy you will receive the cash value not the face amount. If you die your beneficiaries will receive the face amount.
So, you have a $1 million face amount insurance policy. The premiums are set so that by age 100, “your” cash value investment account will have a value of $1 million. If you die right before turning 100, how much money will your beneficiary get?
If you guessed $1 million face amount + $1 million cash value account = $2 million, you guessed wrong. See the last quoted sentence: “If you die your beneficiaries will receive the face amount.” Your beneficiary gets the $1 million face amount, but the insurance company keeps the $1 million investment account to offset their loss (which would instead go to your beneficiary if you had done “buy term and invest the difference”).
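In code form, the same payoff-at-death comparison, using the $1 million figures from the example (the account values themselves are placeholders):

```python
# The payout-at-death difference described above: with an ordinary whole life
# policy the beneficiary receives only the face amount (the insurer keeps the
# cash value to offset its loss); with term plus a separate investment account,
# the beneficiary receives the death benefit plus the account.

def whole_life_payout(face_amount, cash_value):
    # cash_value is retained by the insurer, not added to the benefit
    return face_amount

def term_plus_invest_payout(face_amount, investment_account):
    return face_amount + investment_account

print(whole_life_payout(1_000_000, 1_000_000))        # -> 1000000
print(term_plus_invest_payout(1_000_000, 1_000_000))  # -> 2000000
```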
This is because the cash value account is not your money anymore. The account belongs to the insurance company; I’ve read whole life policies and seen this stated in the fine print that people don’t read. Now, you may think you can access this account, right? Yes and no. It’s true that the money in it grows tax-free, but getting your money from the account isn’t as simple as you might think.
You can’t just take money out of a cash value account. If you want to take money out of the cash value account without surrendering the entire policy, it is not usually a withdrawal; it’s a loan. The reason it’s called a “loan” is that, as we’ve established, the account is not really yours, it’s the insurance company’s! According to the FAQ, here is what happens when you try to take a loan on a cash value account (emphasis mine):
There may be a waiting period of up to three years before a loan is available. If the policy owner borrows from the policy, the cash value is used as collateral, and interest is charged at the rate specified or described in the policy. Any money owed on an outstanding policy loan is deducted from the benefits upon the insured’s death or from the cash value if the policy owner surrenders the policy for cash.
As it says, you can get the money out of the cash value account by surrendering your policy… but then you have no life insurance anymore (whereas with buy term and invest the difference, taking money out of an investment account may incur taxes if they are not already paid, but you don’t have to cancel your life insurance to do so). See the penultimate sentence of the first quote: “If you surrender your policy you will receive the cash value not the face amount.” Your coverage (the “face amount”) is gone if you surrender your policy to get the cash values. Here is what happens when you surrender the policy:
When a policy is surrendered, the owner is entitled to at least a portion of the cash value. The actual amount that the owner receives will depend whether there are any outstanding loans or unpaid premiums that can be deducted from the cash value.
With “buy term and invest the difference,” if you take money out of your investment account, it doesn’t decrease the death benefit of your policy. Another article claims that you can do a partial withdrawal from the cash value account without it being a loan, but it can decrease the death benefit:
You can also make a full or partial withdrawal of your cash value. Depending on your policy and level of cash value, a withdrawal might reduce your death benefit. Exactly how much varies by policy, but in the case of universal life insurance your death benefit would be reduced on a dollar-for-dollar basis. For example, if you had a $100,000 death benefit with a $20,000 cash value and you withdrew $10,000, your resulting death benefit would be $90,000.
In some cases, partially withdrawing your cash value could decimate your death benefit. For some traditional whole life insurance policies, the death benefit could be reduced by more than the amount you withdraw.
The cash value surrender values will be spelled out in a schedule in a whole life contract. And for the first 3-5 years, they can be dismal (and would be less than if you had invested the difference and withdrew it paying taxes). From the insure.com article (emphasis mine):
Also important to note is the fluctuating rate of return on cash value in this particular whole life insurance policy. Your first year’s premium disappears into fees and expenses without a penny into your cash value account. Only at year 4 does the cash value rate of return go positive. That means if you drop this policy within the first few years, you’ve made a terrible investment.
[...]
The chart at the right summarizes the estimated average rate of return if you kept this particular life insurance policy 5, 10, 15 or 20 years. Even if you held this policy for 10 years, your estimated cash value average rate of return works out to only 2 percent because you’re still making up ground for those expensive first few years. You should be prepared to hold a whole life insurance policy for the long haul in order to make a potentially good investment.
That whole article is a good read. Notice that even though a cash value account can match a “buy term and invest the difference” strategy that accumulates 4.6% a year, your beneficiary does not get the cash value investment if you die:
You may be looking at this example and adding up cash value plus death benefit, but remember: With ordinary whole life insurance policies like this one, your beneficiaries do not receive the cash value when you die; they receive only the death benefit.
So if you die with the cash value account, your beneficiary gets $100,000, but if you die with the term strategy, your beneficiary gets $100,000 + the value of the investment account. If you die in year 20, that is $28,000 (don’t know if those dollars are taxed yet or not, but the difference is still stark), making the total gain by your beneficiary $128,000, instead of $100,000 with whole life.
So, what’s the deal with cash value accounts and why are they so wacky? To understand, realize that the cash value account is not an investment vehicle for you; it is a protection for the insurance company. From this article:
Whole life was the name of the original policy form. Premiums were payable for the whole of life (hence the name) or some shorter premium paying period (e.g., to age 65 or for 20 years). Regardless of the premium paying period, a guaranteed cash value was built up such that, at the terminal age of the policy (typically age 95 or 100), the cash value would equal the face amount. Thus, as policyholders got older, the “net amount at risk” to the insurance company (the difference between the cash value and the face amount) would decline while the reserve built up tax free. The true objective was not to build up a “savings account,” but rather to create a reserve against a known future claim.
Cash value accounts are for mitigating the risk of insurance companies, so they can make money even though they are insuring you your “whole life” (well, up to age 95-100). In contrast, the way term life insurance policies make money is that a certain percentage of policies expire and are not renewed before the insured dies, so the insurance company keeps those premiums… but this is how insurance in general works, and it’s far more straightforward. You can always get a guaranteed renewable term policy, and then actually renew it.
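A toy illustration of that declining “net amount at risk”; the linear cash-value growth, issue age, and dollar figures are simplifying placeholders:

```python
# Toy model of the insurer's declining "net amount at risk" (face amount minus
# guaranteed cash value) from the quoted passage. The linear growth of the cash
# value toward the face amount at the terminal age is a placeholder assumption.

FACE_AMOUNT = 1_000_000
ISSUE_AGE, TERMINAL_AGE = 30, 100

for age in range(ISSUE_AGE, TERMINAL_AGE + 1, 10):
    cash_value = FACE_AMOUNT * (age - ISSUE_AGE) / (TERMINAL_AGE - ISSUE_AGE)
    net_amount_at_risk = FACE_AMOUNT - cash_value
    print(age, round(net_amount_at_risk))
```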
It’s very dangerous to bundle life insurance and investments in whole life policies.
Hey, glad to help, and sorry if I came off as impatient (more than I usually do, anyway). And I’m in favor of DIY too, which is how I do my mutual fund/IRA investing, and why I complained about how online-unfriendly life insurance is. But the idea behind “infinite banking” (basically, using a mutual whole life insurance plan, which have been around for hundreds of years and endured very hard times robustly, as a savings account) is very much DIY, once you get it set up.
Again, take it with a grain of salt because I’m still researching this...
It occurs to me: are there legal issues with people contesting wills? I think that a life insurance policy with the cryonics provider listed as the beneficiary would be more difficult to fight.
I actually find these arguments plausible, but the trouble is that they, in my view, prove too much.
Well said.
Therefore, in at least some cases, I’m sure that people reject cryonics not because they’re too biased to assess the arguments in favor of it, but because they honestly feel no identification with the future entities that it aims to produce—and I don’t see how this different subjective preference can be considered “irrational” in any way.
I think this is true. Cryonics being the “correct choice” doesn’t just depend on correct calculations and estimates (probability of a singularity, probability of revival, etc) and a high enough sanity waterline (not dismissing opportunities out of hand because they seem strange). Whether cryonics is the correct choice also depends upon your preferences. This fact seems to be largely missing from the discussion about cryonics. Perhaps because advocates can’t imagine people not valuing life extension in this way.
In particular, I personally can’t bring myself to feel any identification whatsoever with some computer program that runs a simulation of my brain, no matter how accurate, and no matter how closely isomorphic its data structures might be to the state of my brain at any point in time.
I wouldn’t pay 5 cents for a duplicate of me to exist. (Not for the sole sake of her existence, that is. If this duplicate could interact with me, or interact with my family immediately after my death, that would be a different story as I could delegate personal responsibilities to her.)
Interesting!
Seems a bit odd to me, but perhaps we should chat in more detail some time.
Would it change your mind if that computer program [claimed to] strongly identify with you?
I’m not sure I understand your question correctly. The mere fact that a program outputs sentences that express strong claims about identifying with me would not be relevant in any way I can think of. Or am I missing something in your question?
Well right, obviously a program consisting of “printf(“I am Vladimir_M”)” wouldn’t qualify… but a program which convincingly claimed to be you, i.e. had access to all your memories, intellect, inner thoughts, etc., and claimed to be the same person as you.
No, as I wrote above, I am honestly unable to feel any identification at all with such a program. It might as well be just a while(1) loop printing a sentence claiming it’s me.
I know of some good arguments that seem to provide a convincing reductio ad absurdum of such a strong position, most notably the “fading qualia” argument by David Chalmers, but on the other hand, I also see ways in which the opposite view entails absurdity (e.g. the duplication arguments). Thus, I don’t see any basis for forming an opinion here except sheer intuition, which in my case strongly rebels against identification with an upload or anything similar.
If you woke up tomorrow to find yourself situated in a robot body, and were informed that you had been killed in an accident and your mind had been uploaded and was now running on a computer, but you still felt, subjectively, entirely like “yourself”, how would you react? Or do you not think that that could ever happen? (that would be a perfectly valid answer, I’m just curious what you think, since I’ve never had the opportunity to discuss these issues with someone who was familiar with the standard arguments, yet denied the possibility)
For the robotic “me”—though not for anyone else—this would provide a conclusive answer to the question of whether uploads and other computer programs can have subjective experiences. However, although fascinating, this finding would provide only a necessary, not a sufficient condition for a positive answer to the question we’re pursuing, namely whether there is any rational reason (as opposed to freely variable subjective intuitions and preferences) to identify this entity with my present self.
Therefore, my answer would be that I don’t know how exactly the subjective intuitions and convictions of the robotic “me” would develop from this point on. It may well be that he would end up feeling strongly that he is the true continuation of my person and rejecting what he would remember as my present intuitions on the matter (though this would be complicated by the presumable ease of making other copies). However, I don’t think he would have any rational reason to conclude that it is somehow factually true that he is the continuation of my person, rather than some entirely different entity that has been implanted with false memories identical to my present ones.
Of course, I am aware that a similar argument can be applied to the “normal me” who will presumably wake up in my bed tomorrow morning. Trouble is, I would honestly find it much easier to stop caring about what happens to me tomorrow than to start caring about computer simulations of myself. Ultimately, it seems to me that the standard arguments that are supposed to convince people to broaden their parochial concepts of personal identity should in fact lead one to dissolve the entire concept as an irrational reification that is of no concern except that it’s a matter of strong subjective preferences.
Getting copied from a frozen brain into a computer is a pretty drastic change, but suppose instead it were done gradually, one neuron at a time. If one of your neurons were replaced with an implant that behaved the same way, would it still be you? A cluster of N neurons? What if you replaced your entire brain with electronics, a little at a time?
Obviously there is a difference, and that difference is significant to identity; but I think that difference is more like the difference between me and my younger self than the difference between me and someone else.
While I understand why someone would see the upload as possibly not themselves (and I have strong sympathy with that position), I do find it genuinely puzzling that someone wouldn’t identify their revived body as themselves. While some people might argue that they have no connection to the entity that will have their memories a few seconds from now, the vast majority of humans don’t buy into that argument. If they don’t, then it is hard to see how a human who is cooled and then revived is any different from a human who has their heart stopped for a bit during a heart transplant, or from someone who stops breathing in a very cold environment for a few minutes, or from someone who goes to sleep under anesthesia, or even from someone who goes to sleep normally and wakes up in the morning.
Your point about weirdness signaling is a good one, and I’d expand on it slightly: for much of society, even thinking about weird things at a minimal level is a severe weirdness signal. So for many people, the expected utility of any random weird idea is likely to be so low that the cost of even putting in the effort to think about it will almost certainly outweigh any benefit. And when one considers how many weird ideas are out there, the chance that any given one of them will turn out to be useful is very low. To use just a few examples: how many religions are there? How many conspiracy theories? How many miracle cures? Indeed, almost all LW readers will never investigate the vast majority of these, for essentially this sort of utility heuristic.
JoshuaZ:
The problem here is one of a continuum. We can easily imagine a continuum of procedures where on one end we have relatively small ones that intuitively appear to preserve the subject’s identity (like sleep or anesthesia), and on the other end more radical ones that intuitively appear to end up destroying the original and creating a different person. By Buridan’s principle, this situation implies that for anyone whose intuitions give different answers for the procedures at the opposite ends of the continuum, at least some procedures that lie in between will result in confused and indecisive intuitions. For me, cryonic revival seems to be such a point.
In any case, I honestly don’t see any way to establish, as a matter of more than just subjective opinion, at which exact point in that continuum personal identity is no longer preserved.
This seems similar to something that I’ll arbitrarily decide to call the ‘argument from arbitrariness’: every valid argument should be pretty and neat and follow the zero, one, infinity rule. One example of this was during the torture versus dust specks debate, when the torturers chided the dust speckers for having an arbitrary point at which stimuli that were not painful enough to be considered true pain became just painful enough to be considered as being in the same reference class as torture. I’d be really interested to find out how often something like the argument from arbitrariness turns out to have been made by those on the ultimately correct side of the argument, and use this information as a sort of outside view.
I share the position that Kaj_Sotala outlined here: http://lesswrong.com/lw/1mc/normal_cryonics/1hah
In the relevant sense there is no difference between the Richard that wakes up in my bed tomorrow and the Richard that might be revived after cryonic preservation. Neither of them is a continuation of my self in the relevant sense because no such entity exists. However, evolution has given me the illusion that tomorrow-Richard is a continuation of my self, and no matter how much I might want to shake off that illusion I can’t. On the other hand, I have no equivalent illusion that cryonics-Richard is a continuation of my self. If you have that illusion you will probably be motivated to have yourself preserved.
Ultimately this is not a matter of fact but a matter of personal preference. Our preferences cannot be reduced to mere matters of rational fact. As David Hume famously wrote: “’Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger.” I prefer the well-being of tomorrow-Richard to his suffering. I have little or no preference regarding the fate of cryonics-Richard.
I don’t mean to insult you (I’m trying to respect your intelligence enough to speak directly rather than delicately) but this kind of talk is why cryonics seems like a pretty useful indicator of whether or not a person is rational. You’re admitting to false beliefs that you hold “because you evolved that way” rather than using reason to reconcile two intuitions that you “sort of follow” but which contradict each other.
Then you completely discounted the suffering or happiness of a human being who is not able to be helped by anyone other than your present self in this matter. You certainly can’t be forced to seek medical treatment against your will for this, so other people are pretty much barred by law from forcing you to not be dumb with respect to the fate of future-Richard. He is in no one’s hands but your own.
Hume was right about a huge amount of stuff in the context of initial epistemic conditions of the sort that Descartes proposed when he extracted “I think therefore I am” as one basis for a stable starting point.
But starting from that idea and a handful of others like “trust of our own memories as a sound basis for induction” we have countless terabytes of sense data from which we can develop a model of the universe that includes physical objects with continuity over time—one class of which are human brains that appear to be capable of physically computing the same thoughts with which we started out in our “initial epistemic conditions”. The circle closes here. There might be some new evidence somewhere if some kind of Cartesian pineal gland is discovered someday which functions as the joystick by which souls manipulate bodies, but barring some pretty spectacular evidence, materialist views of the soul are the best theory standing.
Your brain has physical continuity in exactly the same way that chairs have physical continuity, and your brain tomorrow (after sleeping tonight while engaging in physical self repair and re-indexing of data structures) will be very similar to your brain today in most but not all respects. To the degree that you make good use of your time now, your brain then is actually likely to implement someone more like your ideal self than even you yourself are right now… unless you have no actualized desire for self improvement. The only deep change between now and then is that you will have momentarily lost “continuity of awareness” in the middle because your brain will go into a repair and update mode that’s not capable of sensing your environment or continuing to compute “continuity of awareness”.
If your formal theory of reality started with Hume and broke down before reaching these conclusions then you are, from the perspective of pragmatic philosophy, still learning to crawl. This is basically the same thing as babies learning about object permanence except in a more abstract context.
Barring legitimate pragmatic issues like discount rates, your future self should be more important to you than your present self, unless you’re mostly focused on your “contextual value” (the quality of your relationships and interactions with the broader world) and feel that your contextual value is high now and inevitably declining (or perhaps will be necessarily harmed by making plans for cryonics).
The real thing to which you should be paying attention (other than to make sure they don’t stop working) is not the mechanisms by which mental content is stored, modified, and transmitted into the future. The thing you should be paying attention to is the quality of that content and how it functionally relates to the rest of the physical universe.
For the record, I don’t have a cryonics policy either, but I regard this as a matter of a failure to conscientiously apply myself to executing on an issue that is obviously important. Once I realized the flaw in my character that led to this state of affairs I began working to fix it, which is something that, for me, is still a work in progress.
Part of my work is analyzing the issue enough to have a strongly defensible, coherent, and pragmatic position on cryonics, which I’ll consider to have been fully resolved either (1) once I have an argument for not signing up that would be good enough for a person able to reason in a relatively universal manner, or (2) once I have a solid argument the other way which has led me and everyone I care about, including my family and close friends, to take the necessary steps and sign ourselves up.
When I set up a “drake equation for cryonics” and filled in the probabilities under optimistic (inside view) calculations I determined the value to be trillions of dollars. Under pessimistic assumptions (roughly, the outside view) I found that the expected value was epsilon and realized that my model was flawed because it didn’t even have terms for negative value outcomes like “loss of value in ‘some other context’ because of cryonics/simulationist interactions”.
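To make the shape of that calculation concrete, here is a minimal sketch in Python; every probability and dollar figure below is a made-up placeholder, not a number from my actual model:

```python
# A minimal expected-value sketch of a "drake equation for cryonics".
# All inputs are hypothetical placeholders.

def cryonics_expected_value(p_good_preservation, p_provider_survives,
                            p_revival_tech, p_actually_revived,
                            value_if_revived, cost,
                            p_bad_outcome=0.0, bad_outcome_value=0.0):
    """Chain the conditional probabilities, multiply by the value of revival,
    then add any expected negative outcomes and subtract the up-front cost."""
    p_success = (p_good_preservation * p_provider_survives *
                 p_revival_tech * p_actually_revived)
    return p_success * value_if_revived + p_bad_outcome * bad_outcome_value - cost

# Optimistic ("inside view") placeholders:
optimistic = cryonics_expected_value(0.8, 0.8, 0.7, 0.7,
                                     value_if_revived=10_000_000_000, cost=100_000)

# Pessimistic ("outside view") placeholders, including the kind of
# negative-value term my first model left out entirely:
pessimistic = cryonics_expected_value(0.3, 0.2, 0.1, 0.1,
                                      value_if_revived=1_000_000, cost=100_000,
                                      p_bad_outcome=0.01, bad_outcome_value=-1_000_000)

print(f"optimistic expected value:  ${optimistic:,.0f}")
print(f"pessimistic expected value: ${pessimistic:,.0f}")
```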
So, pretty much, I regard the value of information here as being enormously large, and once I refine my models some more I expect to have a good idea as to what I really should do as a selfish matter of securing adequate health care for me and my family and friends. Then I will do it.
Hi Jennifer. Perhaps I seem irrational because you haven’t understood me. In fact I find it difficult to see much of your post as a response to anything I actually wrote.
No doubt I explained myself poorly on the subject of the continuity of the self. I won’t dwell on that. The main question for me is whether I have a rational reason to be concerned about what tomorrow-Richard will experience. And I say there is no such rational reason. It is simply a matter of brute fact that I am concerned about what he will experience. (Vladimir and Byrnema are making similar points above.) If I have no rational reason to be concerned, then it cannot be irrational for me not to be concerned. If you think I have a rational reason to be concerned, please tell me what it is.
I don’t understand why psychological continuity isn’t enough of a rational reason. Your future self will have all your memories, thoughts, viewpoints, and values, and you will experience a continuous flow of perception from yourself now to your future self. (If you sleep or undergo general anesthesia in the interim, the flow may be interrupted slightly, but I don’t see why that matters.)
Hi Blueberry. How is that a rational reason for me to care what I will experience tomorrow? If I don’t care what I will experience tomorrow, then I have no reason to care that my future self will have my memories or that he will have experienced a continuous flow of perception up to that time.
We have to have some motivation (a goal, desire, care, etc) before we can have a rational reason to do anything. Our most basic motivations cannot themselves be rationally justified. They just are what they are.
Of course, they can be rationally explained. My care for my future welfare can be explained as an evolved adaptive trait. But that only tells me why I do care for my future welfare, not why I rationally should care for my future welfare.
Richard, you seem to have come to a quite logical conclusion about the difference between intrinsic values and instrumental values and what happens when an attempt is made to give a justification for intrinsic values at the level of values.
If a proposed intrinsic value is questioned and justified with another value statement, then the supposed “intrinsic value” is revealed to have really been instrumental. Alternatively, if no value is offered then the discussion will have necessarily moved out of the value domain into questions about the psychology or neurons or souls or evolutionary mechanisms or some other messy issue of “simple” fact. And you are quite right that these facts (by definition as “non value statements”) will not be motivating.
We fundamentally like vanilla (if we do) “because we like vanilla” as a brute fact. De gustibus non est disputandum: there is no disputing taste. Yay for the philosophy of values :-P
On the other hand… basically all humans, as a matter of fact, do share many preferences, not just for obvious things like foods that are sweet or salty or savory but also for really complicated high level things, like the respect of those with whom we regularly spend time, the ability to contribute to things larger than ourselves, listening to beautiful music, and enjoyment of situations that create “flow” where moderately challenging tasks with instantaneous feedback can be worked on without distraction, and so on.
As a matter of simple observation, you must have noticed that there exist some things which it gives you pleasure to experience. To say that “I don’t care what I will experience tomorrow” can be interpreted as a prediction that “Tomorrow, despite being conscious, I will not experience anything which affects my emotions, preferences, feelings, or inclinations in either positive or negative directions”. This statement is either bluntly false (my favored hypothesis), or else you are experiencing a shocking level of anhedonia for which you should seek professional help if you want to live very much longer (which of course you might not if you’re really experiencing anhedonia), or else you are a non human intelligence and I have to start from scratch trying to figure you out.
Taking it as granted that you and I can both safely predict that you will continue to enjoy life tomorrow… then an inductive proof can be developed that “unless something important changes from one day to the next” you will continue to have a stake in the day after that, and the day after that, and so on. When people normally discuss cryonics and long term values it is the “something important changing” issue that they bring up.
For example, many people think that they only care about their children… until they start seeing their grandchildren as real human beings whose happiness they have a stake in, and in whose lives they might be productively involved.
Other people can’t (yet) imagine not falling prey to senescence, and legitimately think that death might be preferable to a life filled with pain which imposes costs (and no real benefits) on their loved ones who would care for them. In this case the critical insight is that not just death but also physical decline can be thought of as a potentially treatable condition and so we can stipulate not just vastly extended life but vastly extended youth.
But you are not making any of these points so that they can even be objected to by myself or others… You’re deploying the kind of arguments I would expect from an undergrad philosophy major engaged in motivated cognition because you have not yet “learned how to lose an argument gracefully and become smarter by doing so”.
And it is for this reason that I stand by the conclusion that in some cases beliefs about cryonics say very much about the level of pragmatic philosophical sophistication (or “rationality”) that a person has cultivated up to the point when they stake out one of the more “normal” anti-cryonics positions. In your case, you are failing in a way I find particularly tragic, because normal people raise much better objections than you are raising—issues that really address the meat of the matter. You, on the other hand, are raising little more than philosophical confusion in defense of your position :-(
Again, I intend these statements only in the hope that they help you and/or audiences who may be silently identifying with your position. Most people make bad arguments sometimes and that doesn’t make them bad people—in fact, it helps them get stronger and learn more. You are a good and valuable person even if you have made comments here that reveal less depth of thinking than might be hypothetically possible.
That you are persisting in your position is a good sign, because you’re clearly already pretty deep into the cultivation of rationality (your arguments clearly borrow a lot from previous study) to the point that you may harm yourself if you don’t push through to the point where your rationality starts paying dividends. Continued discussion is good practice for this.
On the other hand, I have limited time and limited resources and I can’t afford to spend any more on this line of conversation. I wish you good luck on your journey, perhaps one day in the very far future we will meet again for conversation, and memory of this interaction will provide a bit of amusement at how hopelessly naive we both were in our misspent “childhood” :-)
Why is psychological continuity important? (I can see that it’s very important for an identity to have psychological continuity, but I don’t see the intrinsic value of an identity existing merely because it is promised to have psychological continuity.)
In our lives, we are trained to worry about our future self because eventually our plans for our future self will affect our immediate self. We also might care about our future self altruistically: we want that person to be happy just as we would want any person to be happy whose happiness we are responsible for. However, I don’t sense any responsibility to care about a future self that needn’t exist. On the contrary, if this person has no effect on anything that matters to me, I’d rather be free of being responsible for this future self.
In the case of cryonics, you may or may not decide that your future self has an effect on things that matter to you. If your descendants matter to you, or propagating a certain set of goals matters to you, then cryonics makes sense. I don’t have any goals that project further than the lifespan of my children. This might be somewhat unique, and it is the result of recent changes in philosophy. As a theist, I had broad-stroke hopes for the universe that are now gone.
Less unique, I think, though perhaps not generally realized, is the fact that I don’t feel any special attachment to my memories, thoughts, viewpoints and values. What if a person woke up to discover that the last days were a dream and they actually had a different identity? I think they wouldn’t be depressed about the loss of their previous identity. They might be depressed about the loss of certain attachments if the attachments remained (hopefully not too strongly, as that would be sad). The salient thing here is that all identities feel the same.
I’ve just read this article by Ben Best (President of CI): http://www.benbest.com/philo/doubles.html
He admits that the possibility of duplicating a person raises a serious question about the nature of personal identity, that continuity is no solution to this problem, and that he can find no other solution. But he doesn’t seem to consider that the absence of any solution points to his concept of personal identity being fundamentally flawed.
Interesting. However, I don’t see any problems with the nature of personal identity. My hunch is that I’m actually not confused about it.
In a lifetime, there is continuity of memories and continuity of values and goals even as they slowly change over time. I can trust that the person who wakes up tomorrow will be ‘me’ in this sense. She may be more refreshed and have more information, but I trust her to act as “I” would. On the other hand, she might be excessively grouchy or suffer a brain injury, in which case this trust is misplaced. However, she is not me personal-identity-wise, for a variety of reasons:
I do not have access to her stream of consciousness.
I do not have operative control of her body.
[In both cases, the reason is because her thoughts and actions take place in the future. Eventually, I will have access to her thoughts and control of her body and then she becomes “me”.]
Personal identity exists only for a moment. It is the running of some type of mental thought process.
Suppose I was duplicated overnight, and two byrnemas woke up in the morning. Both byrnemas would have continuity with the previous byrnema with respect to memories, values and goals. However, neither of them is the personal identity of the byrnema of the night before, just as whenever I wake up I’m not the personal identity of the night before, exactly for the reasons I bulleted.
With the two duplicates, there would be two distinct personal identities. You simply count the number of independent accesses to thoughts and motor control of bodies and arrive at two. Both byrnemas have a subjective experience of personal identity, of course, and consider the other byrnema an “other”. However, this “other” is similar to oneself in a way that is unprecedented: a twin sister that also has your memories, goals and values.
I think duplicates would be most problematic for loved ones. They would find themselves in a position of loving both duplicates, and being able to empathize with both, not really caring so much if one was deleted but being very distraught if both were deleted. That would be strange, because we haven’t had any experience with that, but I’m sure we would adjust well enough.
People would take risks with their person, but only after checking and double-checking that their backup was recent and well. People wouldn’t care if their person died—they would understand (now through experience rather than introspection) that what makes them them is their memories, values, goals and a moment. And the moment is transient anyway. The illusion of self existing for more than a moment would be broken.
The post you linked to by Ben Best mentioned the impossibility of a personal identity in two different physical locations. Actually, interestingly, it would be possible to have an identity in two physical locations. To do this, you would need to stream the sensory data of two bodies into a single brain, located anywhere. As long as the brain had access to both bodies’ sensory data, and could operate both bodies, and there was a single shared stream of consciousness, then that person would be physically located in two places at once. (But this is completely different from just duplicating a person.)
If you care about a person, then while you might not care as much if a recent duplicate or a recently duplicated person were lost, you would still care just as much if either of them suffers.
As is implied by my ‘recently’, the two will diverge, and you might end up with loyalty to both as distinct individuals, or with a preference for one of them.
Also, I don’t think parents value each of newborn twins less because they have a spare.
I’m in the signing process right now, and I wanted to comment on the “work in progress” aspect of your statement. People think that signing up for cryonics is hard. That it takes work. I thought this myself up until a few weeks ago. This is stunningly NOT true.
The entire process is amazingly simple. You contact CI (or your preserver of choice) via their email address and express interest. They ask you for a few bits of info (name, address) and send you everything you need already printed and filled out. All you have to do is sign your name a few times and send it back. The process of getting life insurance was harder (and getting life insurance is trivially easy).
So yeah, the term “working on it” is not correctly applicable to this situation. Someone who’s never climbed a flight of stairs may work out for months in preparation, but they really don’t need to, and afterwards might be somewhat annoyed that no one who’d climbed stairs before had bothered to tell them so.
Literally the only hard part is the psychological effort of doing something considered so weird. The hardest part for me (and what had stopped me for two+ years previously) was telling my insurance agent when she asked “What’s CI?” that it’s a place that’ll freeze me when I die. I failed to take into account that we have an incredibly tolerant society. People interact—on a daily basis—with other humans who believe in gods and energy crystals and alien visits and secret-muslim presidents without batting an eye. This was no different. It was like the first time you leap from the high diving board and don’t die, and realize that you never would have.
The hard part (and why this is also a work in progress) involves secondary optimizations, the right amount of effort to put into them, and understanding whether these issues generalize to other parts of my life.
SilasBarta identified some of the practical financial details involved in setting up whole life versus term plus savings versus some other option. This is even more complex for me because I don’t currently have health insurance and ideally would like to have a personal physician, health insurance, and retirement savings plan that are consistent with whatever cryonics situation I set up.
Secondarily, there are similarly complex social issues that come up because I’m married, love my family, am able to have philosophical conversations with them, and don’t want to “succeed” at cryonics but then wake up for 1000 years of guilt that I didn’t help my family “win” too. If they don’t also win, when I could have helped them, then what kind of a daughter or sister would I be?
Finally, I’ve worked on a personal version of a “drake equation for cryonics” and it honestly wasn’t a slam dunk economic decision when I took a pessimistic outside view of my model. So it would seem that more analysis here would be prudent, which would logically require some time to perform. If I had something solid I imagine that would help convince my family—given that they are generally rational in their own personal ways :-)
Lastly, as a meta issue, there are issues around cognitive inertia in both the financial and the social arenas, so that whatever decisions I make now may “stick” for the next forty years. Against this I weigh the issue of “best being the enemy of good” because (in point of fact) I’m not safe in any way at all right now… which is an obvious negative. In what places should I be willing to tolerate erroneous thinking and sloppy execution that fails to obtain the maximum lifetime benefit, and to what degree should I carry that “sloppiness calibration” over to the rest of my life?
So, yeah, it’s a work in progress.
I’m pretty much not afraid of the social issues that you brought up. If people who disagree with me about the state of the world want to judge me, that’s their problem up until they start trying to sanction me or spread malicious gossip that blocks other avenues of self improvement or success. The judgment of strangers who I’ll never see again is mostly a practical issue and not that relevant compared to relationships that really matter, like those with my husband, nuclear family, friends, personal physician, and so on.
Back in 1999 I examined these issues. In 2004 I got to the point of having all the paperwork to sign and turn in with Alcor and Insurance, with all costs pre-specified. In each case I backed off because I calculated the costs and looked at my income and looked at the things I’d need to cut out of my life (and none of it was coffee from starbucks or philanthropy or other fluffy BS like that—it was more like the simple quality of my food and whether I’d be able to afford one bedroom vs half a bedroom) and they honestly didn’t seem to be worth it. As I’ve gotten older and richer and more influential (and partly due to influence from this community) I’ve decided I should review the decision again.
The hard part for me is dotting the i’s and crossing the t’s (and trying to figure out where it’s safe to skip some of these steps) while seeking to minimize future regrets and maximize positive outcomes.
You can’t hold yourself responsible for their decisions. That way lies madness, or tyranny. If you respect them as free agents then you can’t view yourself as the primary source for their actions.
It might be rational to do so under extreme enough circumstances. For example, if a loved one had to take pills every day to stay alive and had a tendency to accidentally forget them (or to believe new-agers who told them that the pills were just a Big Pharma conspiracy), it would be neither madness nor tyranny to do nearly anything to prevent that from happening.
The question is: to what degree is failing to sign up for cryonics like suicide by negligence?
I’m not finding this. Can you refer me to your trivially easy agency?
I used State Farm, because I’ve had car insurance with them since I could drive, and renters/owner’s insurance since I moved out on my own. I had discounts both for multi-line and loyalty.
Yes, there is some interaction with a person involved. And you have to sit through some amount of sales-pitching. But ultimately it boils down to answering a few questions (2-3 minutes), signing a few papers (1-2 minutes), sitting through some process & pitching (30-40 minutes), and then having someone come to your house a few days later to take some blood and measurements (10-15 minutes). Everything else was done via mail/email/fax.
Heck, my agent had to do much more work than I did. Prior to this she didn’t know that you can designate someone other than yourself as the owner of the policy, so it required some training on her end.
I tried a State Farm guy, and he was nice enough, but he wanted a saliva sample (not blood) and could not tell me what it was for. He gave me an explicitly partial list but couldn’t complete it for me. That was spooky. I don’t want to do that.
Huh. That is weird. I don’t blame you.
Come to think of it, I didn’t even bother asking what the blood sample was for. But I tend to be exceptionally un-private. I don’t expect privacy to be a part of life among beings who regularly share their source code.
It’s not a matter of privacy. I can’t think of much they’d put on the list that I wouldn’t be willing to let them have. (The agent acted like I could only possibly be worried that they were going to do genetic testing, but I’d let them do that as long as they, you know, told me, and gave me a copy of the results.) It was just really not okay with me that they wanted it for undisclosed purposes. Lack of privacy and secrets shouldn’t be unilateral.
Disagree. What’s this trivially easy part? You can’t buy it like you can buy mutual fund shares, where you just go online, transfer the money, and have at it. They make it so you have to talk to an actual human insurance agent, just to get quotes. (I understand you’ll have to get a medical exam, but still...)
Of course, in fairness, I’m trying to combine it with “infinite banking” by getting a whole life policy, which has tax advantages. (I would think whole life would make more sense than term anyway, since you don’t want to limit the policy to a specific term, risking that you’ll die afterward and not be able to afford the preservation if the take-off hasn’t happened.)
Nope. Whole life is a colossal waste of money. If you buy term and invest the difference in the premiums (what you would be paying the insurance company if you bought whole life) you’ll end up way ahead.
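As a rough illustration of that arithmetic (with made-up premiums and an assumed after-tax return, since the real numbers vary by policy and person):

```python
# A rough sketch of "buy term and invest the difference".
# The premiums and the return rate are hypothetical, not quotes from any policy.

whole_life_premium = 2_000.0   # assumed annual whole-life premium
term_premium = 300.0           # assumed annual term premium for the same face amount
annual_return = 0.05           # assumed after-tax return on the invested difference
years = 30

side_fund = 0.0
for _ in range(years):
    # invest this year's premium difference, then let the whole fund grow for a year
    side_fund = (side_fund + (whole_life_premium - term_premium)) * (1 + annual_return)

print(f"Invested premium difference after {years} years: ${side_fund:,.0f}")
```

Whether that side fund actually beats the policy’s cash value depends on the dividend rate, the tax treatment, and what happens to the cash value at death.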
Yes, I’m intimately familiar with the argument. And while I’m not committed to whole life, this particular point is extremely unpersuasive to me.
For one thing, the extra cost for whole life is mostly retained by you, nearly as if you had never spent it, which makes it questionable how much of that extra cost is really a cost.
That money goes into an account which you can withdraw from, or borrow from on much more favorable terms than any commercial loan. It also earns dividends and guaranteed interest tax-free.
If you “buy term and invest the difference”, you either have to pay significant taxes on any gains (or even, in some cases, the principal) or lock the money up until you’re ~60. The optimistic “long term” returns of the stock market have been shown to be a bit too optimistic, and given the volatility, you are being undercompensated. (Mutual whole life plans typically earned over 6% in ’08, when stocks tanked.) You are also unlikely to earn the 12%/year they always pitch for mutual funds—and especially not after taxes.
Furthermore, if the tax advantages of IRAs are reneged on (which given developed countries’ fiscal situations, is looking more likely every day), they’ll most likely be hit before life insurance policies.
So yes, I’m aware of the argument, but there’s a lot about the calculation that people miss.
It’s really hard to understand insurance products with the information available on the internet, and you are right that it is extremely unfriendly to online research. When I investigated whole life vs. term a few years ago, I came to the conclusion that there are a lot of problems with whole life, and I wouldn’t touch it with a ten foot pole.
Actually, there is something far weirder and more insidious going on. By “extra cost,” I assume you are referring to the extra premium that goes into the insurance company’s cash value investment account, beyond the amount of premium that goes towards your death benefit (aka “face amount,” aka what the insurance company pays to your beneficiary if you die while the policy is in force). Wait, what? Didn’t I mean your cash value account, and were my words “the insurance company’s cash value account” a slip of the tongue? Read on...
Let’s take a look at the FAQ of the NY Dept. of Insurance, which explains the difference between the face amount of your policy (aka “death benefit,” aka what the insurance company pays to your beneficiary if you die while the policy is in force) and the cash value:
So, you have a $1 million face amount insurance policy. The premiums are set so that by age 100, “your” cash value investment account will have a value of $1 million. If you die right before turning 100, how much money will your beneficiary get?
If you guessed $1 million face amount + $1 million cash value account = $2 million, you guessed wrong. See the last quoted sentence: “If you die your beneficiaries will receive the face amount.” Your beneficiary gets the $1 million face amount, but the insurance company keeps the $1 million investment account to offset their loss (whereas that account would instead go to your beneficiary if you had done “buy term and invest the difference”).
This is because the cash value account is not your money anymore. The account belongs to the insurance company; I’ve read whole life policies and seen this stated in the fine print that people don’t read. Now, you may think you can access this account, right? Yes and no. It’s true that the money in it grows tax-free, but getting your money from the account isn’t as simple as you might think.
You can’t just take money out of a cash value account. If you want to take money out of the cash value account without surrendering the entire policy, it is not usually a withdrawal, it’s a loan. The reason it’s called a “loan” is because, as we’ve established, the account is not really yours, it’s the insurance company’s! According to the FAQ, here is what happens when you try to take a loan on a cash value account (emphasis mine):
As it says, you can get the money out of the cash value account by surrendering your policy… but then you have no life insurance anymore (whereas with buy term and invest the difference, taking money out of an investment account may incur taxes if they are not already paid, but you don’t have to cancel your life insurance to do so). See the penultimate sentence of the first quote: “If you surrender your policy you will receive the cash value not the face amount.” Your coverage (the “face amount”) is gone if you surrender your policy to get the cash values. Here is what happens when you surrender the policy:
With “buy term and invest the difference,” if you take money out of your investment account, it doesn’t decrease the death benefit of your policy. Another article claims that you can do a partial withdrawal from the cash value account without it being a loan, but it can decrease the death benefit:
The cash value surrender values will be spelled out in a schedule in a whole life contract. And for the first 3-5 years, they can be dismal (and would be less than if you had invested the difference and withdrew it paying taxes). From the insure.com article (emphasis mine):
That whole article is a good read. Notice that even though a cash value account can match a “buy term and invest the difference” strategy that accumulates 4.6% a year, your beneficiary does not get the cash value investment if you die:
So if you die with the cash value account, your beneficiary gets $100,000, but if you die with the term strategy, your beneficiary gets $100,000 + the value of the investment account. If you die in year 20, that is $28,000 (don’t know if those dollars are taxed yet or not, but the difference is still stark), making the total gain by your beneficiary $128,000, instead of $100,000 with whole life.
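Putting those same figures into a few lines of code (nothing new, just the arithmetic above spelled out, with the side account’s unclear tax treatment ignored):

```python
# Year-20 comparison from the figures above: a $100,000 death benefit either way,
# plus a $28,000 side account under "buy term and invest the difference".
# Taxes on the side account are ignored here, as noted in the text.

face_amount = 100_000
side_account_at_year_20 = 28_000

whole_life_payout = face_amount                        # insurer keeps the cash value
term_plus_invest_payout = face_amount + side_account_at_year_20

print(f"whole life beneficiary gets:    ${whole_life_payout:,}")
print(f"term + invest beneficiary gets: ${term_plus_invest_payout:,}")
```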
So, what’s the deal with cash value accounts and why are they so wacky? To understand, realize that the cash value account is not an investment vehicle for you; it is a protection for the insurance company. From this article:
Cash value accounts are for mitigating the risk of insurance companies, so they can make money even though they are insuring you for your “whole life” (well, up to age 95-100). In contrast, the way term life insurance policies make money is that a certain percentage of policies expire and are not renewed before the insured dies, so the insurance company keeps those premiums… but this is how insurance in general works, and it’s far more straightforward. You can always get a guaranteed renewable term policy, and then actually renew it.
It’s very dangerous to bundle life insurance and investments in whole life policies.
I believe “buy term and invest the difference” is the slogan of the Amway-like Multi Level Marketer (MLM, legal pyramid scheme) Primerica.
That’s how I first encountered it, too. But it seems to be mainstream and widely accepted advice that is confirmed independently.
Wow, thanks for all that! Upvoted. I’m biased in favor of DIY, but those are really good points and I didn’t realize some of that.
Hey, glad to help, and sorry if I came off as impatient (more than I usually do, anyway). And I’m in favor of DIY too, which is how I do my mutual fund/IRA investing, and why I complained about how online-unfriendly life insurance is. But the idea behind “infinite banking” (basically, using a mutual whole life insurance plan, a product that has been around for hundreds of years and has endured very hard times robustly, as a savings account) is very much DIY, once you get it set up.
Again, take it with a grain of salt because I’m still researching this...
It occurs to me: are there legal issues with people contesting wills? I think that a life insurance policy with the cryonics provider listed as the beneficiary would be more difficult to fight.
Well said.
Well, they say that cryonics works whether you believe in it or not. Why not give it a try?