If you answered yes to all six questions and have not signed up for cryonics and do not intend to, please give your reasons in the comments.
I do not wish to damage the ozone layer or contribute to global warming.
I think the resources should be spent on medical care for the young, rather than for the old. Do you know how many lives lost to measles one corpsicle costs?
If I am awakened in the future, I have no way to earn a living.
I used to like Larry Niven’s sci-fi.
Yes, these answers are somewhat flip. But …
I can easily imagine someone rational signing up for cryonics. What I have more trouble imagining is someone rational becoming evangelical on the topic. Surely there are lives easier and cheaper to save than mine. Why is it important to you to convince me on this? Why aren’t you asking me to contribute to Doctors without Borders? Are you perhaps seeking validation of your own life choices?
Economies of scale mean that increasing numbers of cryonics users lower costs and improve revival chances. I would class this with disease activism, e.g. patients (and families of patients) with a particular cancer collectively organizing to fund and assist research into their disease. It’s not a radically impartial altruist motivation, but it is a moral response to a coordination/collective action problem.
Yes, that makes sense. Though that kind of thinking does not motivate me to go door-to-door every Saturday trying to convince my neighbors to buy more science books.
You value all lives equally, with no additional preference to your own?
If you suddenly fell ill with a disease which is curable, but is very expensive, would you refuse treatment to save “lives easier and cheaper to save than” your own?
Naturally, insurance may cover said expensive treatment, but it can also cover cryonics. Do you only believe in insurance with reasonable caps on cost, such that your medical expenses can never be more than average?
You value all lives equally, with no additional preference to your own?
No; in fact I am probably over on the egoist side of the spectrum among LWers. I said my answers were somewhat flip.
My moral intuitions are pretty close to “Do unto others as they do unto you” except that there is a uni-directional inter-generational flow superimposed. I draw my hope of immortality from children, nephews, nieces, etc.
Do you only believe in insurance with reasonable caps on cost, such that your medical expenses can never be more than average?
I favor payment caps and co-pays on all medical insurance, whether I pay through premiums or taxes. That is only common sense. But capping at everybody-gets-exactly-the-average kinda defeats the purpose of an insurance scheme, doesn’t it?
Do you know how many lives lost to measles one corpsicle costs?
That doesn’t make it obvious whether it’s worth it though. All those people with measles were going to die anyway, after all. Saving a few people for billions of years sounds much better than saving thousands of people for dozens of years.
Saving a few people for billions of years sounds much better than saving thousands of people for dozens of years.
Whether that is true depends on the discount rate. I suspect that with reasonable discount rates of, say, 1% per annum, the calculation would come out in favor of saving the thousands.
To say nothing of the fact that those thousands saved, after leading full and productive lives, may choose to apply their own savings to either personal immortality or saving additional thousands.
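The arithmetic behind that suspicion can be sketched: at a constant discount rate r, a continuous stream of life-years is worth at most 1/r discounted years, no matter how long it runs. The headcounts below (3 cryonics patients, 5,000 measles patients, 50 years each) are illustrative assumptions of mine, not figures from the thread:

```python
import math

def discounted_life_years(years, rate=0.01):
    """Present value, in discounted life-years, of a continuous stream of
    life-years lasting `years`, at a constant annual discount rate."""
    return (1 - math.exp(-rate * years)) / rate

# A few (say 3) people saved for a billion years each: the stream saturates
# at 1/rate = 100 discounted years per person, so about 300 total.
cryo = 3 * discounted_life_years(1e9)

# Thousands (say 5,000) of people saved for dozens (say 50) of years each:
# about 39.3 discounted years per person, roughly 197,000 total.
measles = 5000 * discounted_life_years(50)
```

Under these assumptions the measles side wins by about three orders of magnitude, and even a billion-fold increase in lifespan cannot close the gap, because the per-person value is capped at 1/rate.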
By sidereal or subjective time? If the former, running minds on faster hardware can evade most of the discounting losses.
Interesting distinction—I hadn’t yet realized its importance.
Subjective time seems to be the one to use in discounting values. If I remain frozen for 1000 sidereal years, no subjective time passes, so there is no discounting. If I then remain alive physically for 72 years on both scales, I end up living years worth only half as much as baseline years. If I am then uploaded, further year-counting and discounting uses subjective time, not sidereal time.
Thanks for pointing this out.
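The scheme above can be sketched as a running total: accumulate subjective time as sidereal time multiplied by run speed (zero while frozen), and discount only over that. The 1% rate and the frozen-then-72-years timeline are the ones used in this thread:

```python
def subjective_discount(intervals, rate=0.01):
    """Discount factor after a sequence of (sidereal_years, speed) intervals,
    where speed 0 is cryofreeze, 1 is baseline, >1 is a fast upload.
    Only elapsed subjective time (sidereal_years * speed) is discounted."""
    subjective_years = sum(years * speed for years, speed in intervals)
    return (1 + rate) ** -subjective_years

# 1000 years frozen (speed 0), then 72 ordinary years (speed 1):
# the freeze contributes nothing, and 72 subjective years at 1% gives
# a factor of about 0.49, i.e. roughly half, matching the rule of 72.
factor = subjective_discount([(1000, 0.0), (72, 1.0)])
```

Note that under this rule a faster upload burns through discount factor more quickly per sidereal year, which is what drives the lending asymmetries discussed below.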
I don’t see how being simulated on fast hardware changes the psychological fact of discounting future experience, but it may well mean that I need a larger endowment, collecting interest in sidereal time, in order to pay my subjective-monthly cable TV bills.
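The size of that endowment is a simple perpetuity calculation: at speedup s, a subjective month lasts 1/s sidereal months, so bills accrue s times faster in sidereal terms. The dollar figures below are illustrative assumptions, not anything from the thread:

```python
def required_endowment(bill_per_subjective_month, speedup, sidereal_monthly_rate):
    """Endowment whose sidereal-time interest covers a bill arriving once per
    subjective month. At speedup s, bills accrue at s * bill per sidereal
    month, and a perpetuity paying that costs (s * bill) / rate."""
    return speedup * bill_per_subjective_month / sidereal_monthly_rate

# A $50/subjective-month cable bill, running 100x faster than baseline,
# with 0.3% sidereal monthly interest: about $1.67 million of principal.
endowment = required_endowment(50, 100, 0.003)
```

So the endowment scales linearly with run speed, which is why fast uploads in this scenario need so much more capital than baseline humans with the same subjective lifestyle.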
Note that this distinction affords ways to care more or less about the far future: go into cryo, or greatly slow down your upload runspeed, and suddenly future rewards matter much more. So if the technology exists you should manipulate your subjective time to get the best discounted rewards.
Very interesting topic. People with a low uploaded run speed should be more willing to lend money (at sidereally calculated interest rates) and less willing to borrow than people with high uploaded run speeds. So people who run fast will probably be in hock to the people who run slow. But that is ok because they can probably earn more income. They can afford to make the interest payments.
Physical mankind, being subjectively slower than uploaded mankind and the pure AIs, will not be able to compete intellectually, but will survive by collecting interest payments from the more productive members of this thoroughly mixed economy.
But even without considering AIs and uploading, there is enough variation in discount rates between people here on earth to make a difference—a difference that may be more important to relative success than is the difference in IQs. People with low discount rates, that is people with a high tolerance for delayed gratification, are naturally seen as more trustworthy than are their more short-term-focused compatriots. People with high discount rates tend to max out their credit cards, and inevitably find themselves in debt to those with low discount rates.
One could write several top level posts on this general subject area.
I don’t think there will be lending directly between entities with very different run speeds. If you’re much slower, you can’t keep track of who’s worth lending to, and if you’re much faster, you don’t have the patience for slow deliberation. There might well be layers of lenders transferring money(?) between speed zones.
Almost on topic: Slow Tuesday Night by R.A. Lafferty. Recommended if you’d like a little light-hearted transhumanism with casual world-building.
Actually, people probably do discount by sidereal time, not subjective time, and this is a good explanation for why people aren’t interested in their post-cryonics selves: that future self is discounted across all the time spent frozen.
A portion of the discounting that’s due to unpredictability does not change with your subjective runspeed. If you’re dividing utilons between present you and you after a million years in cryofreeze, you should use a large discount, due to the likelihood that your planet or your civilization will not survive a million years of cryofreeze, or that the future world will be hostile or undesirable.
I think we’re talking about pure time preference here. Turning risk of death into a discount rate rather than treating it using probabilities and timelines (ordinary risk analysis) introduces weird distortions, and doesn’t give a steady discount rate.
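The distortion is easy to exhibit: only a constant hazard folds into a steady rate, while a one-off risk (such as a revival that simply might fail) yields an implied "rate" that depends on the horizon you measure it over. The 2% hazard and the 50% revival risk below are illustrative numbers of mine:

```python
import math

def implied_rate(survival, t):
    """Per-year discount rate you would infer at horizon t if risk of death
    were folded into discounting, i.e. the r solving survival(t) = e^(-r*t)."""
    return -math.log(survival(t)) / t

# Constant hazard: survival = e^(-0.02 t). Implied rate is a steady 2%/yr
# at every horizon, so folding it into the discount rate is harmless here.
const_hazard = lambda t: math.exp(-0.02 * t)

# A one-off 50% risk (e.g. revival fails), then near-safety afterward:
# the implied rate is ~69%/yr at t=1 but ~0.8%/yr at t=100. No single
# steady discount rate reproduces this survival curve.
lumpy_hazard = lambda t: 0.5 * math.exp(-0.001 * t)
```

This is the sense in which ordinary risk analysis (probabilities attached to events) and pure time preference (a steady rate) are not interchangeable.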
But maybe discount rate is just a way of estimating all of the risks associated with time passing. Is there any discounting left if you remove all risk analysis from discounting?
Time discounting is something that evolution taught us to do; so we don’t know for certain why we do it.
Certainly time discounting is something that evolution taught us to do. However, it is adjusting for more than risks. $100 now is worth strictly more than $100 later, because now I can do a strict superset of what I can do with it later (namely, spend it on anything between now and then), as well as hold on to it and turn it into $100 later.
$100 now is worth strictly more than $100 later, because now I can do a strict superset of what I can do with it later (namely, spend it on anything between now and then), as well as hold on to it and turn it into $100 later.
There could be Schellingesque reasons to wish to lack money during a certain time. For example, suppose you can have a debt forgiven iff you can prove that you have no money at a certain time; then you don’t want to have money at that time, but you would still benefit from acquiring the money later.
Yes, time discounting isn’t just about risk, so that was a bit silly of me. I would have an advantage in chess if I could make all my moves before you made any of yours.
What’s the connection to Niven? His portrayal of revival as a bad deal?
Yes. As I recall, Niven described a future in which people were generally more interested in acquiring a license to have children than in acquiring a license to thaw a frozen ancestor.
There were a couple of books where a person was revived into a fairly dystopian situation—I forget their names right now. The term “corpsicle” is Niven’s.