Considering what we do value about life (immediate connections, attachments, and interactions), it makes much more sense to invest in figuring out technology to increase lifespan and prevent accidental death.
That may be what you value—but how do you know whether that applies to me?
The ‘we’ population I was referring to was deliberately vague. I don’t know how many people have values as described, or what fraction of people who have thought about cryonics and don’t choose cryonics this would account for. My main point, all along, is that whether cryonics is the “correct” choice depends on your values.
Anti-cryonics “values” can sometimes be easily criticized as rationalizations or baseless religious objections (‘Death is natural’, for example). However, this doesn’t mean that a person couldn’t hold genuine anti-cryonics values, even very similar-sounding ones.
Value-wise, I don’t even know whether cryonics is the correct choice for much more than half or much less than half of all people, but given all the variation among people, I’m pretty sure it’s going to be the right choice for at least a handful and the wrong choice for at least a handful.
My main point, all along, is that whether cryonics is the “correct” choice depends on your values.
Sure. If you don’t value your life that much, then cryo is not for you. But I think that many people who refuse cryo don’t say “I don’t care if I die; my life is worthless to me”, and if they were put in a near-mode situation where many of their close friends and relatives had died, but they had the option to make a new start in a society with an unprecedentedly high quality of life, they wouldn’t choose to die instead.
Perhaps I should make an analogy: would it be rational for a medieval peasant to refuse cryo if revival meant waking as a billionaire in contemporary society, with an appropriate level of professional support and rehabilitation from the cryo company? She would have to place an extremely low value on her own life to say “my life with my medieval peasant friends was the only thing that mattered to me” and turn down the opportunity for a new life of learning and comfort, free of constant pain and hunger.
Perhaps I should make an analogy: would it be rational for a medieval peasant to refuse cryo if revival meant waking as a billionaire in contemporary society, with an appropriate level of professional support and rehabilitation from the cryo company?
This is another issue where, in my view, pro-cryonics people often make unwarranted assumptions. They imagine a future with a level of technology sufficient to revive frozen people, and assume that this will probably mean a great increase in per-capita wealth and comfort, like today’s developed world compared to primitive societies, only even more splendid. Yet I see no grounds at all for such a conclusion.
What I find much more plausible are the Malthusian scenarios of the sort predicted by Robin Hanson. If technology becomes advanced enough to revive frozen brains in some way, it probably means that it will be also advanced enough to create and copy artificial intelligent minds and dexterous robots for a very cheap price. [Edit to avoid misunderstanding: the remainder of the comment is inspired by Hanson’s vision, but based on my speculation, not a reflection of his views.]
This seems to imply a Malthusian world where selling labor commands only the most meager subsistence necessary to keep the cheapest artificial mind running, and biological humans are out-competed out of existence altogether. I’m not at all sure I’d like to wake up in such a world, even if rich—and I also see some highly questionable assumptions in the plans of people who expect that they can simply leave a posthumous investment, let the interest accumulate while they’re frozen, and be revived rich. Even if your investments remain safe and grow at an immense rate, which is itself questionable, the price of lifestyle that would be considered tolerable by today’s human standards may well grow even more rapidly as the Malthusian scenario unfolds.
The honest answer to this question is that it is possible that you’ll get revived into a world that is not worth living in, in which case you can opt for suicide.
And then there’s a chance that you get revived into a world where you are in some terrible situation but not allowed to kill yourself. In this case, you have done worse than just dying.
And then there’s a chance that you get revived into a world where you are in some terrible situation but not allowed to kill yourself. In this case, you have done worse than just dying.
That’s a risk for regular death, too, albeit a very unlikely one. This possibility seems like Pascal’s wager with a minus sign.
That said, I am nowhere near certain that a bad future awaits us, nor that the above-mentioned Malthusian scenario is inevitable. However, it does seem to me the most plausible course of affairs given cheap technology for making and copying minds, and it seems reasonable to expect that such technology would follow from more or less the same breakthroughs that would be necessary to revive people from cryonics.
I think that we wouldn’t actually end up in a Malthusian regime; we’d coordinate so that it didn’t happen. Especially compelling is the fact that in these regimes of high copy fidelity, you could end up with upload “clans” that acted as a single decision-theoretic entity and would quickly gobble up lone uploads through the power that their cooperation gave them.
the price of lifestyle that would be considered tolerable by today’s human standards may well grow even more rapidly as the Malthusian scenario unfolds.
I think that this is the exact opposite of what Robin predicts: he predicts that if the economy grows at a faster rate because of ems, the best strategy for a human is to hold investments, which would make you fabulously rich in a very short time.
That is true—my comment was worded badly and open to misreading on this point. What I meant is that I agree with Hanson that ems likely imply a Malthusian scenario, but I’m skeptical of the feasibility of the investment strategy, unless it involves ditching the biological body altogether and identifying yourself with a future em, in which case you (or “you”?) might feasibly end up as a wealthy em. (From Hanson’s writing I’ve seen, it isn’t clear to me if he automatically assumes the latter, or if he actually believes that biological survival might be an option for prudent investors.)
The reason is that in a Malthusian world of cheap AIs, it seems to me that the prices of resources necessary to keep biological humans alive would far outrun any returns on investments, no matter how extraordinary those returns might be. Moreover, I’m also skeptical that humans could realistically expect their property rights to be respected in a Malthusian world populated by countless numbers of far more intelligent entities.
Suppose that my biological survival today costs 2,000 MJ of energy per year and requires 5,000 kg of matter, and that I can spend (say) $50,000 today to buy 10,000 MJ of energy and 5,000 kg of matter. I invest my $50,000 and get cryo. Then the em revolution happens, and the prices of these commodities become very high, while at the same time the economy (the total amount of wealth) grows at, say, 100% per week, corrected for inflation.
That means that every week my investment doubles in real terms: after one week, my claim on 10,000 MJ of energy and 5,000 kg of matter has become a claim on 20,000 MJ and 10,000 kg, even though the dollar prices of these commodities have also increased a lot.
The end result: I get very large amounts of energy and matter very quickly, limited only by the speed-of-light limit on how quickly earth-based civilization can grow.
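The compounding claim above can be sketched as a toy calculation. All figures (the $50,000 purchase, the commodity quantities, the 100%-per-week growth rate) are the hypothetical numbers from this comment, not predictions:

```python
# Toy version of the compounding argument; all numbers are hypothetical.
ENERGY_MJ = 10_000        # energy claim bought today for $50,000
MATTER_KG = 5_000         # matter claim bought today
WEEKLY_REAL_GROWTH = 1.0  # assumed 100% growth per week, inflation-corrected

def holdings_after(weeks: int) -> tuple[float, float]:
    """Commodity-denominated value of the invested claim after `weeks`."""
    factor = (1 + WEEKLY_REAL_GROWTH) ** weeks
    return ENERGY_MJ * factor, MATTER_KG * factor

print(holdings_after(1))   # (20000.0, 10000.0) after one week
print(holdings_after(52))  # after a year: ~4.5e19 MJ of energy
```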
The above all assumes preservation of property rights.
That means that every week my investment doubles in real terms: after one week, my claim on 10,000 MJ of energy and 5,000 kg of matter has become a claim on 20,000 MJ and 10,000 kg, even though the dollar prices of these commodities have also increased a lot.
This is a fallacious step. The fact that risk-free return on investment over a certain period is X% above inflation does not mean that you can pick any arbitrary thing and expect that if you can afford a quantity Y of it today, you’ll be able to afford (1+X/100)Y of it after that period. It merely means that if you’re wealthy enough today to afford a particular well-defined basket of goods—whose contents are selected by convention as a necessary part of defining inflation, and may correspond to your personal needs and wants completely, partly, or not at all—then investing your present wealth will get you the power to purchase a similar basket (1+X/100) times larger after that period. [*] When it comes to any particular good, the ratio can be in any direction—even assuming a perfect laissez-faire market, let alone all sorts of market-distorting things that may happen.
Therefore, if you have peculiar needs and wants that don’t correspond very well to the standard basket used to define the price index, then the inflation and growth numbers calculated using this basket are meaningless for all practical purposes. Trouble is, in an economy populated primarily by ems, biological humans will be exactly such outliers. It’s enough for one factor critical to human survival to get bid up exorbitantly, and it’s adios amigos. I can easily think of more than one candidate.
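A minimal sketch of the basket-vs-single-good point, using a simple fixed-weight (Laspeyres-style) index and entirely invented weights and price changes: a portfolio that is “ten times richer” as measured by the index can nonetheless end up able to afford less of one particular good whose price exploded but which carries a tiny index weight.

```python
# Invented numbers: "compute" dominates the index; "life_support" is a
# tiny component whose price explodes.
index_weights = {"compute": 0.99, "life_support": 0.01}      # basket weights
price_multiplier = {"compute": 1.5, "life_support": 5000.0}  # price growth over some period

# Inflation as measured by the weighted basket of price relatives:
inflation = sum(w * price_multiplier[g] for g, w in index_weights.items())  # 51.485

# Nominal growth of a portfolio that grew 10x "in real terms" by this index:
nominal_growth = 10 * inflation  # 514.85

# Purchasing power over life support specifically, relative to today:
affordability = nominal_growth / price_multiplier["life_support"]
print(round(affordability, 2))  # 0.1 -- only a tenth as much life support
```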
The above all assumes preservation of property rights.
From the perspective of an em barely scraping out a virtual or robotic existence, a surviving human wealthy enough to keep a biological body alive would look the way it would look to us if a whole rich continent’s worth of land, capital, and resources were owned by a being whose mind was so limited and slow that it took a year to do one second’s worth of human thinking, while we toiled 24/7, barely able to make ends meet. I don’t know with how much confidence we should expect property rights to be stable in such a situation.
[*] To be precise, the contents of the basket will also change during that period if it’s of any significant length. This, however, gets us into the nebulous realm of Fisher’s chain indexes and similar numerological tricks, on which the dubious edifice of macroeconomic statistics rests to a large degree.
If the growth above inflation isn’t defined in terms of today’s standard basket of goods, then is it really growth? I mean, if I defined a changing basket of goods that was the standard one up until 1991 and thereafter was based exclusively on the cost of sending an email, we would see massive negative inflation and spuriously high growth rates as emails became cheaper to send due to falling computer and network costs.
I.e., Robin’s prediction of fast growth rates is presumably in terms of today’s basket of goods, right?
The point of ems is that they will do work that is useful by today’s standard, rather than just creating a multiplicity of some (by our standard) useless commodity like digits of pi that they then consume.
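The email-basket distortion described above can be made concrete with a toy calculation (all figures invented): switching the index basket to cost-per-email reports huge deflation, and hence spurious “real” growth, with no change in anyone’s actual wealth.

```python
# Toy version of the email-basket thought experiment; all figures invented.
cost_per_email = {1991: 1.00, 2001: 0.001}     # emails get 1000x cheaper
nominal_income = {1991: 30_000, 2001: 30_000}  # nominal income unchanged

deflator = cost_per_email[2001] / cost_per_email[1991]  # 0.001, i.e. 99.9% deflation
real_income_2001 = nominal_income[2001] / deflator      # in "1991 dollars"
growth = real_income_2001 / nominal_income[1991]
print(round(growth))  # 1000 -- a 1000x "real growth" artifact of the basket choice
```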
If the growth above inflation isn’t defined in terms of today’s standard basket of goods, then is it really growth? I mean, if I defined a changing basket of goods that was the standard one up until 1991 and thereafter was based exclusively on the cost of sending an email, we would see massive negative inflation and spuriously high growth rates as emails became cheaper to send due to falling computer and network costs.
You’re asking some very good questions indeed! Now think about it a bit more.
Even nowadays, you simply cannot maintain the exact same basket of goods as the standard for any period much longer than a year or so. Old things are no longer produced, and more modern equivalents will (and sometimes won’t) replace them. New things appear that become part of the consumption basket of a typical person, often starting as luxury but gradually becoming necessary to live as a normal, well-adjusted member of society. Certain things are no longer available simply because the world has changed to the point where their existence is no longer physically or logically possible. So what sense does it make to compare the “price index” between 2010 and 1950, let alone 1900, and express this ratio as some exact and unique number?
The answer is that it doesn’t make any sense. What happens is that government economists define new standard baskets each year, using formalized and complex, but ultimately completely arbitrary criteria for selecting their composition and determining the “real value” of new goods and services relative to the old. Those estimates are then chained to make comparisons between more distant epochs. While this does make some limited sense for short-term comparisons, in the long run, these numbers are devoid of any sensible meaning.
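The chaining procedure being criticized here can be sketched as a minimal chained Laspeyres index, with invented prices and basket quantities. Each year-over-year link prices the previous year’s basket at new versus old prices, and the links are multiplied to compare distant years; the long-run number depends on how the basket is revised along the way.

```python
# Minimal chained Laspeyres index; all prices and quantities are invented.
prices = [
    {"bread": 1.0, "pc": 2000.0},
    {"bread": 1.2, "pc": 1500.0},
    {"bread": 1.5, "pc": 800.0},
]
quantities = [  # the "standard basket" as revised each year
    {"bread": 100.0, "pc": 0.1},
    {"bread": 100.0, "pc": 0.5},
    {"bread": 100.0, "pc": 1.0},
]

def laspeyres_link(p_old, p_new, basket):
    """Cost of the given basket at new prices relative to old prices."""
    return (sum(p_new[g] * basket[g] for g in basket)
            / sum(p_old[g] * basket[g] for g in basket))

chained = 1.0
for t in range(len(prices) - 1):
    chained *= laspeyres_link(prices[t], prices[t + 1], quantities[t])
print(round(chained, 3))  # the long-run "price level" ratio; its value
                          # depends on the arbitrary basket revisions
```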
Not to even mention how much the whole thing is a subject of large political and bureaucratic pressures. For example, in 1996, the relevant bodies of the U.S. government concluded that the official inflation figures were making the social security payments grow too fast for their taste, so they promptly summoned a committee of experts, who then produced an elaborate argument that the methodology hitherto used had unsoundly overstated the growth in CPI relative to some phantom “true” value. And so the methodology was revised, and inflation obediently went down. (I wouldn’t be surprised if the new CPI math indeed gives much more prominence to the cost of sending emails!)
Now, if such is the state of things even when it comes to the fairly slow technological and economic changes undertaken by humans in recent decades, what sense does it make to project these numbers into an em-based economy that develops and changes at a speed hardly imaginable for us today, and whose production is largely aimed at creatures altogether different from us? Hardly any, I would say, which is why I don’t find the attempts to talk about long-term “real growth” as a well-defined number meaningful.
I.e., Robin’s prediction of fast growth rates is presumably in terms of today’s basket of goods, right?
I don’t know what he thinks about how affordable biological human life would be in an em economy, but I’m pretty sure he doesn’t define his growth numbers tied to the current CPI basket. From the attitudes he typically displays in his writing, I would be surprised if he would treat things valued by ems and other AIs as essentially different from things valued by humans and unworthy of inclusion into the growth figures, even if humans find them irrelevant or even outright undesirable.
I don’t think it’s a matter of whether you value your life but why. We don’t value life unconditionally (say, just a metabolism, or just having consciousness—both would be considered useless).
if they were put in a near-mode situation where many of their close friends and relatives had died, but they had the option to make a new start in a society of unprecedentedly high quality of life, they wouldn’t choose to die instead.
I wouldn’t expect anyone to choose to die, no, but I would predict some people would be depressed if everyone they cared about died and would not be too concerned about whether they lived or not. [I’ll add that the truth of this depends upon personality and generational age.]
Regarding the medieval peasant, I would expect her to accept the offer, but I don’t think she would be irrational for refusing. In fact, if she refused, I would just conclude that she was a very incurious person who couldn’t think of anything special to bring to the future (like her religion, or a type of music she felt passionate about). But I don’t think that lacking curiosity, or lacking goals for the far impersonal future, amounts to having low self-esteem. [Later, I’m adding that if she decided not to take the offer, I would fear she was doing so due to a transient lack of goals. I would rather she had made her decision when all was well.]
(If it were free, I definitely would take the offer and feel like I had gotten a great bargain. I wonder if I can estimate how much I would pay for a cryopreservation that was certain to work? I think $10,000 to $50,000, in the case of no one I knew coming with me, but it’s difficult to estimate.)
This comment may be a case of other-optimizing.
Thank you! It’s a pleasure to chat with you; we should meet up in real life sometime!