Cryonics priors
I am not currently signed up for cryonics. I am considering it, but have not yet decided whether it is the right choice. Here’s my reasoning.
I am very sure of the following:
1. Life is better than death. For any given finite lifespan, I’d prefer a longer one, at least within the bounds of numbers I can reasonably contemplate.
2. Signing up for cryonics increases the expected value of my lifespan.
But then I also believe the following:
3. I am not particularly exceptional among the set of human beings, and so should not value my lifespan much more than that of other humans. I obviously fail at this in practice, but I think the world would be a much better place if I and others didn’t fail so often.
4. The money it would take to sign up for cryonics, though not large, is enough to buy several centuries of healthy life each year if given to GiveWell’s top malaria charities. Since on average I expect to live another 50-60 years without cryonics, the investment would need to increase the expected value of my lifespan by at least 5,000 years to be morally acceptable to me.
5. There is a chance we’ll discover immortality in my lifetime. If so, then signing up for cryonics pays out nothing, and the people who died because I bought insurance instead of giving to charity are people I could have saved for far longer.
So, what do you think is the probability that immortality will be discovered in my lifetime? What about the probability that, if signed up for cryonics, I will live into the far future? These priors would seem to be the key for me to decide whether signing up for cryonics is morally acceptable to me.
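For concreteness, the trade-off in #4 can be sketched in a few lines. Every number below is an illustrative placeholder (the premium, the charity’s cost per healthy life-year, and the payment horizon are all assumptions, chosen only to reproduce the rough 5,000-year figure above):

```python
# All figures are illustrative placeholders, not GiveWell's actual estimates:
annual_cryonics_cost = 300.0   # assumed $/year for membership plus life insurance
cost_per_healthy_year = 3.0    # assumed $ per healthy life-year via a malaria charity
years_of_payments = 50         # roughly the 50-60 remaining years mentioned in #4

# Healthy life-years the same money would buy if donated instead; by the
# person-neutral standard in #3, cryonics must beat this to be acceptable.
breakeven_years = annual_cryonics_cost / cost_per_healthy_year * years_of_payments
print(breakeven_years)  # prints 5000.0 with these placeholder inputs
```

Swapping in your own cost and cost-effectiveness estimates changes the break-even number, but not the shape of the comparison.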
If one really took this extreme position, why buy cryonics for yourself rather than someone else (e.g. subsidizing those who can almost afford it to produce more suspensions per dollar)?
Really, what you’re asking here is “is paying for cryonic suspensions the most cost-effective known way of purchasing person-neutral QALYs in the entire world?” That’s an extremely implausible position that almost no one defends. Even conditional on thinking cryonics was extraordinarily great, paying for research, e.g. scientific tests of the effectiveness of cryonic suspension and related biology, would be better.
You can read or add to the recent previous discussion on this topic.
This is a common confusion on LW, but it doesn’t make sense. The cost of life insurance is scaled to your risk of death during the covered period. If advances in technology reduce your risk of death in a given period, then the cost of life insurance drops. If it falls to negligible levels, just stop paying premiums, or trade in the policy for its cash surrender value.
Consider applying the same logic to fire insurance: if I pay $N for fire insurance each year, future people might invent a perfect fire-suppression system, reducing the risk of my house burning down to zero. Then I would stop buying fire insurance. But that wouldn’t mean the early purchases were foolish: if my house had burned down, future invention of fire-suppression technology wouldn’t have helped.
Your quotation tags are a bit mangled.
No forms of insurance legally obligate you to pay future premiums. If you stop paying premiums the insurance ends.
re: #4: Why must the malaria prevention money come out of your cryonics fund and not your frivolous expense fund? Unless you’re currently donating all of your expendable income, it doesn’t seem like you signing up for cryonics need have any impact on malaria prevention.
I think the moral difference is between Near and Far funds rather than between frivolous expenses and other things.
The Near Fund would be for things you obviously need and/or enjoy, a subset of which is frivolous expenses. These are the funds that you concretely feel the lack of: food, toys, rent, medicine, etc.
The Far Fund is for things that seem like a good idea but need more effort and moral commitment to actually act on: charity, cryonics, investing, education, and so on. These are funds beyond your necessities, spent based on complicated decisions about long-term impacts.
Because of how we think and feel about the impacts of these various concerns on our daily happiness, it’s much more reasonable to talk about trading off cryonics vs. charity than charity vs. your daily latte.
Seems like it might be a good explanation of why so many people come up with this objection. I can’t tell if you think this is a bug or a feature; I think it’s a bug.
It doesn’t matter what expense account you’re taking it from—you can always compare two things and ask which is morally correct. If it’s more moral to give to malaria prevention than to purchase cryonics, then you shouldn’t purchase cryonics and should give to malaria prevention instead. If it’s more moral to purchase cryonics than to make frivolous expenses, then you should purchase cryonics.
(Also, I don’t know about you, but my frivolous expense fund isn’t nearly big enough to eradicate malaria, much less purchase cryonics afterward.)
The point is, you should never find yourself in a state where you have enough excess income after your malaria prevention donations but won’t sign up for cryonics because malaria prevention is more important to you.
EDIT: The real point is more that comparing with malaria prevention could be a dodge—you can make any purchase you want look bad via that comparison, so you need to do it consistently or you’re using it to selectively denigrate some ways of spending money.
That’s a bad argument. Just put a number on the chance that we don’t discover immortality in your lifetime, and multiply that probability by the expected utility of having a cryonics treatment should you die.
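Spelled out, the suggested calculation is a single multiplication. The inputs below are placeholders for illustration, not estimates anyone in this thread has defended:

```python
# Placeholder inputs for illustration only:
p_no_immortality = 0.95      # assumed chance immortality is NOT discovered in your lifetime
qalys_if_suspended = 1000.0  # assumed expected QALYs from a suspension, given that you die

# Expected value of signing up, discounted by the immortality scenario
# in which the cryonics purchase pays out nothing:
expected_qalys = p_no_immortality * qalys_if_suspended
print(expected_qalys)  # roughly 950 QALYs with these placeholder inputs
```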
The biggest question seems to be: if you don’t sign up for cryonics, will you actually donate that same amount of money to GiveWell’s top malaria charities, above and beyond what you would give anyway? Because the worst of the possible outcomes is that no one lives: neither you, nor the malaria sufferers you could have saved.
Are you claiming that this is a true statement about the world or that this is a description of your preferences? In the former case, I’d like to see an argument with that as the conclusion. In the latter case, I would take the fact that you don’t seem to act as if this were your “true” utility function as significant evidence that it is not your “true” utility function.
Assuming that cryonics has a decent chance of working, the question boils down to what you will actually spend your money on if not cryonics. There is no point in saving the money that would be spent on cryonics and instead spending it on vacations to the tropics or something.
Will sacrificing vacation time, or starbucks visits, or high-speed internet, or something similar for cryonics deplete a pool of willpower that would otherwise all be spent on fighting malaria? Will the knowledge that you’re risking your life to fight malaria more effectively make you feel more altruistic, and satisfy your appetite for altruism faster than just donating non-life-saving money to fight malaria, and therefore make you donate less money to fighting malaria overall?
Assuming that:
There continue to be ridiculously cheap lives to be saved over the course of one’s lifetime,
Cryonics does not itself become ridiculously cheap when practiced on a larger scale,
One values lives in a person-neutral way while in a mode of thinking that is not too far-biased to spur significant action,
Cryonics does not increase your total budget for far purchases (e.g. via reduced willpower depletion from resisting luxuries),
then the argument for contributing to (or promoting) an efficient charity such as GiveWell instead of cryonics makes sense to me.
However, I don’t think any of the above are all that likely. Here are my counterarguments:
If GiveWell is given more attention and promoted more effectively, human lives will eventually become less ridiculously cheap to save. It’s a simple matter of redirecting money that would otherwise go to less efficient charities. The requirement for this is more advertising/exposure. For the median smart person who spends significant time online and does not have a lot of money, time contributions towards advertising GiveWell and how ridiculously cheap it is to save a life that way may be more effective than equivalent cash contributions (given how relatively little-known it is).
If more people purchase cryonics, cryonics will become less expensive. The chance of cryonics working will also go up because it will be practiced in a higher grade clinical setting (which carries its own costs, but these come out of the normal cost of dying in the first world).
Part of the discomfort that cryonics causes (and one reason I think it is popular on LessWrong) is that it forces you to decompartmentalize between Near and Far. This is an important rationality skill. Developing it could easily trigger a higher tendency to take action on Far issues like donating to GiveWell (or, generally, choosing an efficient charity like GiveWell over a more “fuzzy-optimized” charity).
Cryonics tends to feel like a very selfish choice. Thus money spent on cryonics feels like money spent on a luxury cruise, despite its potential positive externalities (such as making cryonics better/cheaper for everyone else). The part of the brain that insists that so many dollars must go towards selfish things is probably being sated by cryonics costs, leaving more room in the budget for charitable donations when all is said and done (assuming one does not develop the habit of rationalizing selfishness from the exercise).
When I ran the math, the QALY expected value of the two options actually worked out about the same: http://lesswrong.com/lw/6xk/years_saved_cryonics_vs_villagereach/
Given that, it’s basically just a matter of personal preferences and signalling—whichever organization you give money to gains credibility and can use that to help attract even more people.
If you just prefer to live forever instead, the math says that’s TOTALLY OKAY. Do NOT feel selfish about this decision—you’re helping yourself, and you’re helping humanity move towards a rationalist, post-death mindset. Even if cryonics doesn’t work out, the idea of “ending death” is still an incredibly valuable one to have out there.
My personal opinion is that cryonics will not become acceptable until there are some major scientific breakthroughs that make it more plausible. I’m waiting until then to sign up, which may very well mean I die first. If you believe the signalling is useful, though, then cryonics probably benefits vastly more from your support. Or if you just think immortality is really cool :)
This calculation was badly wrong. You allowed for the possibility that cryonically suspended people would wind up living very long and happy lives through future technology (which provided the great majority of the QALYs), but not for the possibility that African children saved from malaria would do the same: many of them would live another 60-70 years and could benefit from any big wave of technological change, life extension, etc. If you selectively count the biggest QALY benefits for some interventions but not others, you will get seriously misleading results.
Also it omitted all the compounding effects of more people in Africa over coming decades and centuries.
It’s a back-of-the-envelope calculation on vast unknowns. I wrote it up because it seemed pointless to try making a decision if we weren’t even going to involve numbers. I happily concede that it is deeply speculative.
First, given it’s a back-of-the-envelope calculation, I assume that anything LESS than a 100% difference (a 2:1 value ratio) can effectively be treated as within the margin of error. So if the ratio I got was 1.5:1, I’d still say they were approximately equal. I can’t off-hand defend this intuition, beyond that it’s sloppy math, so we have to assume I made at least a few mistakes.
Calculating for your life-extension black swan, the math works out to a ratio of 27X + 1 : 1, where X = the chance of such a radical life extension event (i.e. within the next 50 years, the entire world is effectively immortal). At 4%, that’s about a 2:1 ratio, so the point where I’d call it a significant difference. At 33%, it’s a 10:1 ratio, which is the point where I’d concede it’s clearly the correct decision. I personally assume this is < 1%, which means it doesn’t affect the result.
(Math note: 27X+1 comes from the 28:1 cost ratio with cryonics. If the black swan occurs, we’re 28 times more efficient; if it doesn’t, our original equation says we’re still equally efficient. Thus we get a ratio of 28X + 1(1-X) : 1, which simplifies to 27X+1 : 1.)
(Stylistic note: For back of the envelope calculations, as you can see, even astounding events like “radical life extension” require fairly solid odds before they affect things. We can reasonably ignore anything 1% or less, since there’s probably other black swans pointed the other direction, and we need at least a 2:1 difference before we stop calling it “approximately equal” :))
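As a sanity check, the 27X+1 ratio is easy to reproduce numerically. This sketch just restates the arithmetic in the math note, with the 28:1 cost ratio as its only assumption:

```python
def cryonics_ratio(x, cost_ratio=28):
    """Expected efficiency of cryonics relative to malaria charity.

    x is the probability of the radical life-extension 'black swan'.
    If it occurs, cryonics is cost_ratio times as efficient; if not,
    the original calculation treats the two options as equally efficient.
    """
    return cost_ratio * x + 1 * (1 - x)  # simplifies to 27x + 1 when cost_ratio=28

print(cryonics_ratio(0.04))  # roughly 2.08: the ~2:1 "significant difference" threshold
print(cryonics_ratio(0.33))  # roughly 9.91: the ~10:1 "clearly correct" threshold
```

Plugging in your own probability for the black swan (the <1% vs. 25% disagreement below) immediately shows which side of the 2:1 margin you land on.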
As to compounding effects, well… yeah, I have to concede that QALY doesn’t cover that. If you have research that says this should specifically bias our calculations one way or another, that’s useful information. Otherwise I’d just have to conclude insufficient information, or assume that any given QALY compounds approximately identically to any other QALY.
From here, while you may well still disagree with me, I think my methods and assumptions are open enough that you can just plug in your own numbers and get your own results.
Feel free to post your own revised estimates if you do disagree, but I’d appreciate you actually running the numbers first—it’s much easier to discuss if I can tell we disagree because you think the black swan has 25% odds and I think it’s <1%.
It’s also a good way to notice how even radical black swans still require decent odds before they affect the calculations at all, which is important to understand when using a tool like this (I still sometimes find it surprising myself! Which is why I like doing these :))
Cryonics working requires most of the same “black swan,” and a future that provides long lifespans to revived patients is heavily correlated with long lifespans for ordinary people. The chance of cryonics organizations failing increases with time (among other things, their finances are rickety, and there have been past failures), so much of the chance of cryonics working depends on big technological advances happening within a century.
I would say that conditional on cryonics working (which is a major thing to condition on), the chance of the “black swan” (which I would define more precisely) is over 10%, which is enough for it to do better on that front. And the black swan can occur even if the cryopreservation process doesn’t store the key information, which would further increase its advantage.
If we assume cryonics requires advances within the century, it’s still true that those advances are more likely to come LATER than sooner. Cryonics means I survive whether the advance comes tomorrow or the day before the company would have thrown in the towel.
So the odds still favor the cryonaut over the African kids, because the cryonaut has longer for that advance to occur. Also, the cryonaut is someone who has the resources and culture to invest in a long shot like cryonics DESPITE it being unpopular and fringe, whereas the African kids are unlikely to do any such thing. The African kids only survive if there’s a massive worldwide change like the Singularity or Friendly AI.
Keep in mind, we live in a society where millions die of starvation simply because we’re inefficient at distributing food—we have enough to go around, it just doesn’t end up where it’s needed. We’re talking about a VERY radical change, and it needs to happen before most of the kids are already dead (if it happens in exactly 60 years, statistically most of the kids are probably already dead).
Thanks for posing it in this form—this has been my line of thinking for some time. I agree that it is laudable to weigh others’ lives as equal to your own here.
Personally, though I haven’t looked up enough statistics to do the math, my initial approximation says I think supporting charities that fight malaria (etc) is better. My reasoning:
Lives saved now are not just a scalar amount added to the “total number of quality years lived by human beings added across all time”—they are also an investment in the future. Fighting malaria will help speed industrialization (etc) in countries that do not yet contribute as much to research, etc.
In general, most human beings leave a positive impact on society. They invest in their children, perform research, build cities, etc. They give birth to more children, which results in more humans alive when immortality does become reality.
#5 is probably the main reason why I don’t sign up (if you don’t count the fact that the nearest cryonics facility is several thousand miles away from where I am).
“1. Life is better than death. For any given finite lifespan, I’d prefer a longer one, at least within the bounds of numbers I can reasonably contemplate.”
Have you included estimates of possible negative utilities? One thing we can count on is that if you are revived you will certainly be at the mercy of whatever revived you. How do you estimate the probability that what wakes you will be friendly? Is the chance at eternal life worth the risk of eternal suffering?
I don’t see the point of extending the lives of people in chronically dysfunctional societies which can’t solve their own malaria problems. For one thing, the boys who survive could just wind up as soldiers for the local warlord and make other people’s lives less desirable as a consequence. For example, they might wind up raping the young women who survive.
Sorry, this shows seriously confused thinking. You can’t test the effectiveness of “life extension,” “anti-aging,” and “immortality” treatments on humans any faster than the rate at which humans happen to live. We can conduct experiments to create mice which arguably live the murine equivalent of 1,000 human years, but we know the results of those experiments only because the experimental populations have already died, with recorded birth and death dates for each individual mouse. We can’t extrapolate this to humans, because you can’t tell whether someone can live 1,000 years until someone lives 1,000 years, which means we won’t have that knowledge any faster than it can arrive.