I don’t feel grief when somebody gets cryosuspended. Seriously, I don’t, so far as I can tell. I feel awful when I read about someone who wasn’t cryosuspended.
Would that be useful? I expect cryonics to basically work on a technical level. Most of my probability mass for not seeing them again is concentrated in Everett branches where I and the rest of the human species are dead, and for some odd reason that feels like it should make a difference—if somebody goes to Australia for fifty years, is perfectly healthy, and most of my probability mass for not seeing them again is the Earth being wiped out in the meanwhile, I wouldn’t mourn them more than I’d mourn anyone else in danger.
I expect cryonics to basically work on a technical level.
Even given this, I would doubt:
Most of my probability mass for not seeing them again is concentrated in Everett branches where I and the rest of the human species are dead,
E.g. what about the fact that cryonics organizations have the financial structure of precarious defined-benefit pension plans during a demographic decline and massive population aging, save that those currently receiving pensions can’t complain if they are cut?
I wouldn’t mourn them more than I’d mourn anyone else in danger.
I share and endorse-as-psychologically-healthy your general attitude to grief in this kind of situation. Both the broad principle “Would that be useful?” and the more specific evaluation of the actual loss in expected-deaths with existential considerations in mind. That said, I would suggest that there is in fact reason to mourn more than for anyone else in danger. To the extent that mourning bad things is desirable, in this case you would mourn (1 - p(positive transhumanist future)) * (value of expected life if the immediate cause of death wasn’t there).
Compare two universes:
Luna is living contentedly at the age of 150 years. Then someone MESSES WITH TIME, the planet explodes, and so Rocks Fall, Everybody Dies.
Luna dies at 25 and is cryopreserved; then, 125 years later, someone MESSES WITH TIME, the planet explodes, and so Rocks Fall, Everybody Dies.
All else being equal, I prefer the first universe to the second one. I would pay more Sickles to make the first universe exist than for the second to exist. If I were a person inclined towards mourning, were in such a universe, the temporary death event occurred unexpectedly, and I happened to assign p(DOOM) = 0.8, then I would mourn the loss 0.8 * (however much I care about Luna’s previously expected 125 years). This is in addition to having a much greater preference that the DOOM doesn’t occur, both for Luna’s sake and everyone else’s.
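The arithmetic above can be made explicit with a toy sketch (my own framing, not a model anyone in the thread endorses; the function name and the "1 unit of value per year" scaling are invented for illustration):

```python
# Toy sketch of the expected-mourning calculation: grief scales with the
# probability that the positive future needed for revival never arrives.

def expected_mourning(p_doom: float, value_of_lost_years: float) -> float:
    """Mourn the fraction of Luna's expected future that DOOM erases."""
    return p_doom * value_of_lost_years

# With the numbers from the comment: p(DOOM) = 0.8, and caring about
# Luna's previously expected 125 years at (say) 1 unit of value per year.
print(expected_mourning(0.8, 125.0))  # 100.0 units of mourning
```

The point of the calculation is that mourning a cryosuspended person is not zero and not one: it tracks the residual probability that revival never happens, on top of the separate (and larger) preference that DOOM not occur at all.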
I agree that the first universe is better, but I’d be way too busy mourning the death of the planet to mourn the interval between those two outcomes if the planet was actually dead. You could call that mental accounting, but isn’t everything?
Makes sense. I was thinking about the chance of cryonics working, generally… but it also makes sense to think about the chances of cryonics working conditional on other things—such as civilization not collapsing, etc. Those should be higher.
For example, the chance of cryonics working given that we get a Friendly AI: that one seems pretty good.
OK, I think I see your point. You wouldn’t grieve over someone who is incommunicado on a perilous journey, even if you are quite sure you will never hear from them again, even though they might well be dead already. As long as there is a non-zero chance of them being alive, you treat them as such. And you obviously expect cryonics to have a fair chance of success, so you treat cryosuspended people as live.
You wouldn’t grieve over someone who is incommunicado on a perilous journey, even if you are quite sure you will never hear from them again, even though they might well be dead already. As long as there is a non-zero chance of them being alive, you treat them as such. And you obviously expect cryonics to have a fair chance of success, so you treat cryosuspended people as live.
There is an additional component to Eliezer’s comment that I suggest is important. In particular your scenario only mentions the peril of the traveler where Eliezer emphasizes that the traveler is in (approximately) the same amount of danger as he and everyone else is. So the only additional loss is the lack of communication.
Consider an example of the kind of thing that matches your description but that I infer would result in Eliezer experiencing grief: Eliezer has a counterfactual relative whom he loves dearly. The relative isn’t especially rational and either indulges false beliefs or uses a biased and flawed decision-making procedure. The biased beliefs or decision-making lead the relative to go on an absolutely stupid journey that has a 95% chance of failure and death, and for no particularly good reason. (Maybe climbing Everest despite a medical condition that he is in denial about, or something.) In such a case of highly probable death of a loved one, Eliezer could be expected to grieve for the probable pointless death.
The above is very different to if the other person merely ends up on a slightly different perilous path than the one that Eliezer is on himself.
If your estimate of the probability of their eventual revival is p, shouldn’t you feel (1-p) fraction of grief?
Bwuh. That doesn’t seem to add up to normality.
If a loved one who has no intention of ever signing up for life-extension techniques (or suspended animation) departs for a distant country in a final manner with no intention to return or ever contact you again, should you feel 1 grief?
Your system works when one attaches grief to the “currently dead and non-functional” state of a person, but when one attaches it to “mind irrecoverably destroyed such that it will never experience again”, things are different. This will vary very dramatically from person to person, AFAIK.
The caveat here is that whether grief activates or not will depend highly on whether IsFrozen() is closer to IsDead() or to IsSleeping() (or IsOnATrip() or something similar implying prolonged period of no-contact) in the synaptic thought infrastructure* and experience processing of any person’s brain.
If learning of someone being cryo’d fires off thoughts and memory-patterns in the brain that are more like those fired off when learning of death than like those fired off when learning of sleep / coma / prolonged absence in a faraway country or something, then people will likely feel grief when learning of someone being cryo’d.
* Am I using these terms correctly? I’m not a neuro-anything expert (or even serious amateur), so I might be using words that point at completely different places than where I want, or have no real common/established meaning.
I wonder how many religious people have similar experiences, with heaven/hell replacing frozen/dead.
I wonder how many theists* feel similarly regarding those they expect to go to heaven/hell.
*Whatever. Not all theists believe in the afterlife described, not all non-theists don’t. You know what I mean.