At the risk of revealing my stupidity...
In my experience, people who don’t compartmentalize tend to be cranks.
Because the world appears to contradict itself, most people act as if it does. Evolution has created many, many algorithms and hacks to help us navigate the physical and social worlds, to survive, and to reproduce. Even if we know the world doesn’t really contradict itself, most of us don’t have good enough meta-judgement about how to resolve the apparent inconsistencies (and don’t care).
Most people who try to make all their beliefs fit with all their other beliefs, end up forcing some of the puzzle pieces into wrong-shaped holes. Their favorite part of their mental map of the world is locally consistent, but the farther-out parts are now WAY off, thus the crank-ism.
And that’s just the physical world. When we get to human values, some of them REALLY ARE in conflict with others, so not only is it impossible to try to force them all to agree, but we shouldn’t try (too hard). Value systems are not axiomatic. Violence to important parts of our value system can have repercussions even worse than violence to parts of our world view.
FWIW, I’m not interested in cryonics. I think it’s not possible, but even if it were, I think I would not bother. Introspecting now, I’m not sure I can explain why. But natural death seems like a good point to say “enough is enough.” In other words, letting what’s been given be enough. And I am guessing that something similar will keep most of us uninterested in cryonics forever.
Now that I think of it, I see interest in cryonics as a kind of crankish pastime. It takes the mostly correct idea “life is good, death is bad” to such an extreme that it does violence to other valuable parts of our humanity (sorry, but I can’t be more specific).
To try to head off some objections:
I would certainly never dream of curtailing anyone else’s freedom to be cryo-preserved, and I recognize I might change my mind (I just don’t think it’s likely, nor worth much thought).
Yes, I recognize how wonderful medical science is, but I see a qualitative difference between living longer and living forever.
No, I don’t think I will change my mind about this as my own death approaches (but I’ll probably find out). Nor do I think I would change my mind if/when the death of a loved one becomes a reality.
I offer this comment, not in an attempt to change anyone’s mind, but to go a little way to answer the question “Why are some people not interested in cryonics?”
Thanks!
It seems to me that you can’t be more specific because there is not anything there to be more specific about.
What the hell, I’ll play devil’s advocate.
Right now, we’re all going to die eventually, so we can make tradeoffs between life and other values that we still consider to be essential. But when you take away that hard stop, your own life’s value suddenly skyrockets—given that you can almost certainly, eventually, erase any negative feelings you have about actions done today, it becomes hard to justify not doing horrible things to save one’s own life if one were forced to.
Imagine Omega came to you and said, “Cryonics will work; you will be resurrected and have the choice between a fleshbody and simulation, and I can guarantee you live for 10,000 years after that. However, for reasons I won’t divulge, this is contingent upon you killing the next 3 people you see.”
Well, shit. Let the death calculus begin.
You make a valid theoretical point, but as a matter of contingent fact, the only consequence I see is that people signed up will strongly avoid risks of having their brains splattered. Less motorcycle riding, less joining the army, etc.
Making people more risk-averse might indeed give them pause at throwing themselves in front of cars to save a kid, but:
Snap judgments are made on instinct at a level that doesn’t respond to certain factors; you wouldn’t be any less likely to react that way if you previously had the conscious knowledge that the kid had leukemia and wouldn’t be cryopreserved.
In this day and age, risking your life for someone or something else with conscious premeditation does indeed happen even to transhumanists, but extremely rarely. The fringe effect of risk aversion among people signed up for cryonics isn’t worth consigning all of their lives to oblivion.
I don’t worry about this for the same reason that Eliezer doesn’t worry about waking up with a blue tentacle for his arm.
Thanks for that generous spirit. But fine: You see a woman being dragged into an alley by a man with a gun.
Scenario A) You have terminal brain cancer and you have 3 months to live. You read that morning that scientists have learned several new complications arising from freezing a brain.
Scenario B) Your cryonics arrangements papers went through last night. You read that morning that scientists have successfully simulated a dog’s brain in hardware after the dog has been cryogenically frozen for a year.
Now what?
Obviously, you dial 911 on your cell phone. (Or whatever the appropriate emergency number is in your area.)
The generous spirit overfloweth. You don’t have a cell phone. Or it’s broken.
Well, it’s not like I have much of a chance of saving the woman. He has a gun, and I don’t. Whether the woman gets shot is entirely up to the man with the gun. If I try to interfere (and I haven’t contacted the police yet), I think that I’m as likely to make things worse as I am to help. For example, the man with the gun might panic if it seems like he’s losing control of the situation. I’m also physically weaker than most men, so the chances of my managing to overpower him with my bare hands are pretty small.
So, either way, I probably won’t try to be Batman.
This strikes me as purposefully obtuse. Does cryonics increase the present value of future expected life? I think it does. Does that increase affect decisions where we risk our life? I think it does; do you agree?
Yes, I basically agree; I was mostly nitpicking the specific scenario instead of addressing the issue.
If I modify the scenario a bit and say that the assailant has a knife instead of a gun (and my phone’s batteries are dead), then things are different. If he has a knife, intervening is still dangerous, but it’s much easier to save the woman—all I need to do is put some distance between the two so that the woman can run away. I might very well be seriously injured or killed in the process, but I can at least count on saving the woman from whatever the assailant had in store for her. (This is probably the least convenient possible world that you wanted.)
So, yes, I’d be much more likely to play hero against a knife-wielding assailant if I had brain cancer than if I were healthy and had heard about a major cryonics breakthrough.
This seems unusual. You are much more likely to be injured against a knife than you are against a gun. I am moderately confident that I can take a handgun away from someone before they shoot me, given sufficiently close conditions; I am much less confident in my ability to deal with a knife.
From http://www.ncjrs.gov/txtfiles/fireviol.txt
In robberies and assaults, victims are far more likely to die when the perpetrator is armed with a gun than when he or she has another weapon or is unarmed.
Injury rates were higher for robbers with knives, but people are probably less likely to fight back or otherwise provoke a robber with a gun.
That makes the knife scenario an even better dilemma than the gun scenario!
The reason I’m more likely to intervene against a knife is that it’s easier to protect the woman from a knife than from a gun. Against a knife, all she needs is some time to start running, but if a gun is involved, I need to actually subdue the assailant, which I can’t. After all, he is bigger and stronger than me, and even has a weapon that can do serious damage. If all he has is a knife, though, all I need to do is buy enough time; even if I end up dead, the woman will probably get away.
He was just responding to the specific scenario you posited. The fact that you had the broader issue of the effect of cryonics on the value of life at the forefront of your mind does not mean that his failure to comment on it is evidence of purposeful obtuseness.
Commenting in this thread, on this post, and it’s unrecognizable to someone that the effect of cryonics on the value of life is what’s being discussed? I’m not buying it.
I don’t find it contrary to expectation that someone might get caught up in the discussion of the concrete scenario presented to them and ignore the more abstract issue prompting the scenario. Furthermore, the Recent Comments page makes it easy for people to jump into the middle of a conversation without necessarily reading upthread (e.g., Vladimir Nesov today).
There was an apology edited into that.
If you live in the sorts of neighborhoods where women get dragged into alleys, not having a gun seems pretty negligent.
-Longer life has never been given; it has always been taken. There is no giver.
-“Enough is enough” is sour grapes—“I probably don’t have access to living forever, so it’s easier to change my values to be happy with that than to want yet not attain it.” But if it were a guarantee, and everyone else was doing it (as they would if it were a guarantee), then this position would be the equivalent of advocating suicide at some ridiculously young age in the current era.
I assert that the more extremely the idea “life is good, death is bad” is held, the more benefit is rendered to other valuable parts of our humanity. I can’t be more specific.
I’m not quite convinced of the merits of investing in cryonics at this point, though “enough is enough” does not strike me as a particularly salient argument either.
In terms of weighing the utility to me based on some nebulous personal function: Cryonics has an opportunity cost in terms of direct expenses and additionally in terms of my social interactions with other people. Both of these seem to be nominal, though the perhaps $300 or so a year could add quite a bit of utility to my current life, as I live on about $7K per year. Though I very well may die today, not having spent any of that potential money.
On the other side, being revived in the distant future could be quite high in terms of personal utility. Though I have no reason at all to believe the situation will be agreeable; in other words, permanent death very well could be for the best. I would imagine reviving a person from vitrification would be a costly venture, barring future miracle technology. Revival is not currently possible, and there is no reason to think the current processes are being done in any sort of optimal way. At the very least, creating the tech to revive people will be expensive. Future tech or not, I see it likely that revival will come at some cost, with perhaps no choice given to me in the matter. I see this as a likely possibility (at least more likely than a benevolent AI utopia) because science has never fundamentally made people better (more rational?), so far at least; it certainly ticks forward and may improve the lives of some people, but they are all still fundamentally motivated by the same vestigial desires and all have the same deficiencies as before. Given our nature, I see the most likely outcome, past the novelty of the first couple of successful attempts, being some quid pro quo.
Succinctly, my projection of the most likely state of the world in which I would be revived is the same as today, though with more advanced technology. Very often the ones to pioneer new technology aren’t scrupulous. I very well may choose non-existence over an existence of abject suffering, or one where my mind may be used to hurt others, etc. This would be optimizing for the worst-case scenario.
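A minimal sketch of that kind of scenario-weighing, in Python. Every number here is a made-up placeholder rather than an estimate of real revival odds or of how good or bad the future will be; the only point is that a modest positive expectation can be swamped by a small chance of a strongly negative outcome, which is the worst-case worry above.

```python
# Sketch of weighing cryonics outcomes under a "nebulous personal function".
# All probabilities and utilities below are invented placeholders.

scenarios = {
    # outcome: (assumed probability, assumed personal utility)
    "never revived":             (0.90,    0.0),
    "revived into a good world": (0.07,  100.0),
    "revived into a bad world":  (0.03, -200.0),  # e.g. revived without consent
}

annual_cost = 300      # rough yearly expense mentioned above
annual_income = 7_000  # the stated yearly budget
years_paying = 40      # assumed number of years paying premiums

expected_utility = sum(p * u for p, u in scenarios.values())
print(f"Expected utility of signing up (arbitrary units): {expected_utility:+.1f}")
print(f"Cryonics as a share of yearly income: {annual_cost / annual_income:.1%}")
print(f"Total paid over {years_paying} years: ${annual_cost * years_paying:,}")
```

With these particular placeholders the expectation comes out barely positive, and it flips sign as soon as the bad-world branch is weighted a little more heavily.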
(Edit: after having written this entire giant thing, I notice you saying that this was just a “why are some people not interested in cryo” comment, whereas I very much am trying to change your mind. I don’t like trying to change people’s minds without warning (I thought we were having that sort of discussion, but apparently we aren’t), so here’s warning.)
You’re aware that your life expectancy is about 4 times that of the people who built the pyramids, even the Pharaohs, right? That assertion seems to basically be slapping all of your ancestors in the face. “I don’t care that you fought and died for me to have a longer, better life; you needn’t have bothered, I’m happy to die whenever.” Seriously: if a natural life span is good enough for you, start playing Russian roulette once a year around 20 years old; the odds are about right for early humans.
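Spelling out the arithmetic behind that comparison (a rough sketch only: the 1-in-6 yearly hazard is just the roulette odds, and treating it as a constant death rate for early humans past age 20 is my simplifying assumption, not a demographic claim):

```python
# Rough arithmetic for "Russian roulette once a year from age 20".
# Assumes a constant 1-in-6 chance of death each year, which is the game's
# odds, not a measured early-human mortality rate.

p_die_per_year = 1 / 6
start_age = 20

def chance_of_reaching(age):
    """Probability of surviving every annual round up to the given age."""
    return (1 - p_die_per_year) ** (age - start_age)

mean_years_left = 1 / p_die_per_year  # mean of a geometric distribution
print(f"Expected age at death: ~{start_age + mean_years_left:.0f}")
print(f"Chance of reaching 30: {chance_of_reaching(30):.1%}")
print(f"Chance of reaching 40: {chance_of_reaching(40):.1%}")
print(f"Chance of reaching 60: {chance_of_reaching(60):.2%}")
```

So the game kills the average player in his mid-20s and lets only a few percent see 40, which is roughly the flavor of the comparison being made.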
As a sort-of aside, I honestly don’t see a lot of difference between “when I die is fine” and just committing suicide right now. Whatever it is that would stop you from committing suicide should also stop you from wanting to die at any point in the future.
I’m aware this is a minority view, but that doesn’t necessarily make it any less sensible; insert historical examples of once-popular-but-wrong views here.
Then they’ve failed at the actual task, which is to make all of your beliefs fit with reality.
My values are part of reality. Some of them are more important than others. Some of them contradict each other. Knowing these things is part of what lining my beliefs up with reality means: if my map of reality doesn’t include the fact that some of my values contradict, it’s a pretty bad map.
You seem to have confused people who are trying to force their beliefs to line up with each other (an easy path to crazy, because you can make any belief line up with any other belief simply by inserting something crazy in the middle; it’s all in your head after all) with people who are trying to force their beliefs to line up with reality. It’s a very different process.
Part of reality is that one of my most dominant values, one so dominant that almost no other values touch its power, is the desire to keep existing and to keep the other people I care about existing. I’m aware that this is selfish, and my compromise is that if reviving me will use such resources that other people would starve to death or something, I don’t want to be revived (and I believe my cryo documents specify this; or maybe not, it’s kind of obvious, isn’t it??). I don’t have any difficulty lining up this value with the rest of my values; except for pretty landscapes, everything I value has come from other humans.
In some sense, I don’t try to line this, or any other value, up with reality; I’m basically a moral skeptic. I have beliefs that are composed of both values (“death is bad”) and statements about reality (“cryo has a better chance of saving me from death than cremation”) such that the resulting belief (“cryo is good”) is subservient to both matching up with reality (although I doubt anyone will come up with evidence that cryo is less likely to keep you alive than cremation) and my values, but having values and conforming my beliefs with reality are totally separate things.
-Robin
Careful with life-expectancy figures from earlier eras. There was a great chance of dying as a baby, and a great chance for women to die in childbirth. Excluding the first—that is, just counting those who made it to, say, 5 years old—life expectancy shoots up greatly, though obviously not as high as now.
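A toy calculation of that effect, with invented numbers rather than historical estimates, just to show how heavily infant mortality drags down the mean:

```python
# Toy illustration: high infant mortality lowers life expectancy at birth
# without implying that adults died young. All figures are invented.

infant_mortality = 0.30       # assumed fraction dying before age 5
mean_age_infant_death = 2     # assumed average age at death within that group
adult_life_expectancy = 55    # assumed mean age at death for those reaching 5

at_birth = (infant_mortality * mean_age_infant_death
            + (1 - infant_mortality) * adult_life_expectancy)

print(f"Life expectancy at birth: {at_birth:.1f} years")
print(f"Life expectancy for those who reach age 5: {adult_life_expectancy} years")
```

With these made-up figures the at-birth number lands near 39 even though a typical adult lives to 55.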
An important reason for not dying at the moment is that it would make the people you most care about very distraught. Dying by suicide would make them even more distraught. Signing up for cryonics would not make them less distraught and would lead to social disapproval. Not committing suicide doesn’t require that one place a great deal of intrinsic value in one’s own continued existence.
That’s a really good point.
I think if the only reason you’re staying alive is to stop other people from being sad, you’ve got a psychological bug WRT valuing yourself for your own sake that you really need to work on, but that is (obviously) a personal value judgment. If that is the only reason, though, you’re right, suicide is bad and cryo is as bad or worse.
I imagine that such a person will have a really shitty life whenever people close to them leave or die; sounds really depressing. I can only hope, for their sake, that such a person dies before their significant other(s).
-Robin
This is the Reversal test.
When it comes to our values, there is no “reality”, but we can hope to adjust them to be coherent and consistent under reflection. I think your paragraph “As a sort-of aside” is an example of exactly that kind of moral thinking.
This statement is simply not true in this form. My survival instincts prevent me from committing suicide, but they don’t tell me anything about cryonics. On another thread, VijayKrishnan explained this quite clearly:
We are evolutionarily driven to dislike dying and try to postpone it for as long as possible. However I don’t think we are particularly hardwired to prefer this form of weird cryonic rebirth over never waking up at all. Given that our general preference to not die has nothing fundamental about it, but is rather a case of us following our evolutionary leanings, what makes it so obvious that cryonic rebirth is a good thing.
One can try to construct a low-complexity formalized approximation to our survival instincts. (“This is how you would feel about it if you were smarter.”) I have two issues with this. First, these will not actually be instincts (unless we rewire our brain to make them so). Second, I’m not sure that such a formalization will logically imply cryonics. Here is a sort of counterexample:
On a more abstract level, the important thing about “having a clone in the future” aka survival is that you have the means to influence the future. So in a contrived thought experiment you may objectively prefer choosing “heroic, legendary death that inspires billions” to “long, dull existence”, as the former influences the future more. And this formalization/reinterpretation of survival is, of course, in line with what writers and poets like to tell us.
Well, your instincts evolved primarily to handle direct, immediate threats to your life. You could say the same thing about smoking cigarettes (or any other health risk): “My survival instincts prevent me from committing suicide, but they don’t tell me anything about whether to smoke or not.”
But your instincts respond to your beliefs about the world. If you know the health risks of smoking, you can use that to trigger your survival instincts, perhaps with the emotional aid of photos or testimony from those with lung cancer. The same is true for cryonics: once you know enough, not signing up for cryonics is another thing that shortens your life, a “slow suicide”.
You seem to have two objections to cryonics:
1. Cryonics won’t work.
2. Life extension is bad.
#1 is better addressed by the giant amount of information already written on the subject.
For #2 I’d like to quote a bit of Down and Out in the Magic Kingdom:
Everyone who had serious philosophical conundra on that subject just, you know, died, a generation before. The Bitchun Society didn’t need to convert its detractors, just outlive them.
Even if you don’t think life extension technologies are a good thing, it’s only a matter of time before almost everyone thinks they are. Whatever part of “humanity” you value more than life will be gone forever.
ETA: Actually, there is an out: if you build FAI or some sort of world government and it enforces 20th century life spans on people. I can’t say natural life spans because our lives were much shorter before modern sanitation and medicine.
Doesn’t this argument imply that we should self-modify to become monomaniacal fitness-maximizers, devoting every quantum of effort towards the goal of tiling the universe with copies of ourselves? Hey, if you don’t, someone else will! Natural selection marches on; it’s only a matter of time.
I find the likelihood of someone eventually doing this successfully to be very scary. And more generally, the likelihood of natural selection continuing post-AGI, leading to more Hansonian/Malthusian futures.
For #2, there’s also Nick Bostrom’s Fable of the Dragon-Tyrant.
This is not true of all non-compartmentalizers—just the ones you have noticed and remember. Rational non-compartmentalizers simply hold on to that puzzle piece that doesn’t fit until they either
determine where it goes;
determine that it is not from the right puzzle; or
reshape it to correctly fit the puzzle.
The post “Reason as memetic immune disorder” was related. I’ll quote teasers so that you’ll read it:
People who grow up with a religion learn how to cope with its more inconvenient parts by partitioning them off, rationalizing them away, or forgetting about them. Religious communities actually protect their members from religion in one sense—they develop an unspoken consensus on which parts of their religion members can legitimately ignore. New converts sometimes try to actually do what their religion tells them to do.
[. . .]
The reason I bring this up is that intelligent people sometimes do things more stupid than stupid people are capable of. There are a variety of reasons for this; but one has to do with the fact that all cultures have dangerous memes circulating in them, and cultural antibodies to those memes. The trouble is that these antibodies are not logical[. . . .] They are the blind spots that let us live with a dangerous meme without being impelled to action by it.
And my comment there:
If the culture is constrained to hold constant the religion or cultural norms, then the resulting selection will cause the culture to develop blind spots, and also develop an unspoken (because unspeakable) but viciously enforced meta-norm of not seeing the blind spots. But if the culture is constrained to hold opposite meta-norms constant, such as a norm of seeing the blind spots or a norm of actually doing what one’s religion or cultural norms tell one to do, then the resulting selection will act against the dangerous memes instead.
Going to quote this.
And this.
This is the most sane thing I’ve read on Less Wrong lately.
It is probably true that there are things you value differently than other people here, and that this causes you to be less interested in cryonics. However, for a site that says individuals get to choose their values, people here can simultaneously be very presumptuous about what those values are.
I was surprised by this word choice. No amount of medicine can make people immune to damage as if in a video game with cheats enabled.
OK, this isn’t the first time I miscommunicated...today.
I was trying to be extra careful with my language and refrain from attacking because the opinion expressed is a minority one around here.
What I was trying to point out is that the difference between getting patched up for the 50-100 years we manage now and getting patched up for 200-1000 years or so is smaller than the difference between living 200-1000 years and living forever. “Forever” is obviously ridiculous because it violates laws of physics and probability.
I was trying to do something less combative than accuse the commenter of either consciously misrepresenting the concept, subconsciously being too irrational to understand it, or letting bias twist his words into falsity.
My point is excellent, however poor at expressing it I am.
Living forever is not what’s usually under discussion. “Living indefinitely” would be more accurate.
It really goes to the core of what erniebornheimer said:
One reason his comment is so good is that it preempted and squarely responded to anticipated objections. My point is that I think the response to this one relies on a fallacy of equivocation to seem persuasive.