byrnema agreeing with TobyBartels in wanting always to live another day, but being indifferent to his or her own cryonic suspension and revival:
To accept death: once it’s over, it’s over. The things that you valued about life cannot be recovered by resuscitation 200 years later. Nevertheless, life is good. One more day, one more year, one more decade like today would be great. (If anyone can articulate this more fully, please do!)
I, too, want always to live another day (unless my health gets so bad I no longer pay attention to anything or anyone except myself and my pain) but am indifferent to my being cryonically revived.
The way I place these two aspects of my desire into a coherent view is to note that I am useful to the world now (and tomorrow, and next year). In fact, on any given day, the way for me to maximize my usefulness to the world will almost certainly be to try to keep on living and to stay as healthy as possible, because the source of almost all “wealth” or “usefulness” is human creativity (that is, human intelligence combined with a sincere desire to be useful to the world), and a human’s creativity ends when his or her life ends.
Now if I were to be cryonically suspended and then revived, it is almost certain that an intelligence explosion—more precisely an explosion of engineered intelligences which will be much more useful (for whatever purpose to which they are put) than any human intelligence ever was—has taken place, because that is the only thing anyone can think of that would make my revival possible. But I would not be able to help engineered intelligences improve the world: with my relatively puny intelligence, I would just get in their way. Oh, sure, the machines could radically improve my intelligence, making me a “transhuman”, but from where I stand now, this strategy of continuing my usefulness into the post-human era by becoming a transhuman has very low expected utility relative to my simply leaving the post-human era up to the machines (and transhumans other than me); a much better application of my resources is to try to increase the probability of a good intelligence explosion.
In other words, I see the fact that no one has yet figured out how to transform ordinary matter like you would find in a chair or a silicon wafer into an engineered intelligence superior to human intelligence as “good news” in the sense that it gives me an opportunity to be useful. (And I am relatively indifferent to the fact that I am suffering a lot more than I would be if someone had already figured out how to make engineered intelligences.) For the same reason, if offered the chance to be teleported back in time 2000 years, I would take it (even if I could not take any of the wealth, tech or knowledge of the past 2000 years with me), because that would increase the expected usefulness of my intelligence and of my simple human life.
So that is why I am indifferent to my own cryonic suspension and revival: even though I will probably be just as intelligent and just as interested in improving the world after my revival as I am now, it will not matter the way it matters now, because my relative ability to improve the world will be less.
In other words, the way I form my wants into a coherent “explanation” or “system” is to say that I am interested in living as long as practical before the intelligence explosion, but approximately indifferent to continuing my life after it. And from that indifference flows my indifference to my being cryonically suspended and revived.
Another thing: I have found that I am able to take coherent actions over extended periods of time to keep on living, but I have not been able to make any non-negligible effort towards my being successfully suspended and revived. In other words, there is a sense in which the values that I have are an empirical matter not under my control. If mattnewport or someone else is able to reply to this comment with a point that exposes an inconsistency in the “explanation” or “system” I have given above, well, I might be chagrined (because most people do not like to admit in public to holding inconsistent goals or values), but I will probably continue to be unable to motivate myself to take effortful actions over a long period of time to maximize the probability of my successful suspension and revival.
In other words, if as a result of a debate here on Less Wrong (and out of a desire to appear consistent and stable in my goals and values, out of a desire to be accepted by the cool and wonderful people of SIAI or out of some other desire) I were to announce that I have “changed my mind” and I now believe that my being suspended and revived is the right thing for me to choose, I do not see why anyone would care all that much. My desire for my actions and choices to be consistent with my professed values will in that case probably cause me to sign up for cryonics if the cost of signing up is low, but to be honest with you, it is extremely unlikely that anything would cause me to work hard over a long period of time to ensure my suspension and revival because I just do not care and I doubt severely that I could make myself care even if I wanted to make myself care. (There are many goals and values I have unsuccessfully tried to make myself care about.)
One more thing: a lot of people (mostly those who are “motivated” about cryonics and consequently able to take effortful actions regarding it over a long period of time) will probably take what I just said as significant evidence that I cannot be trusted. I am not big on networking, so this is based on observations (from public internet messages and from face-to-face conversations) of only a handful of people, so it could be wrong.
And in turn, for many years I assumed (without any good argument to back it up) that signing up for cryonics (unless it is done for instrumental reasons, as Eliezer did) was significant evidence that the person should not be trusted. Yes: I took the fact that someone shared my indifference to keeping on living for its own sake rather than for the sake of continuing to be useful to the world as significant evidence that I could trust that person. I have recently abandoned that opinion (in favor of vague agnosticism) because most (in fact all) of the extremely strong rationalists I know who had taken the trouble to inform themselves about cryonics and the intelligence explosion held the opposite opinion (and because being confronted with that fact caused me to notice that I had no real basis for my own). But note that people with the opposite opinion probably will not convert to agnosticism like me, because among extremely strong rationalists with the necessary information to form an opinion they are in the majority (even though they are of course a small minority of the general population). So maybe I am being too cautious, but I would tend to advise Toby and byrnema, if they are planning on applying for a visiting fellowship at SIAI, FHI or the like and want to be very careful, to refrain from continuing to post about their values as they apply to cryonics.
Regarding being useful, this is something I strongly identify with. I do not place a high value on self-interest, though I see self-interest as itself a useful value (in the right context). I find that I am more motivated to sign up for cryonics when I look at it as an example to set for others than when I look at it as a way to save my own skin. I am essentially more motivated to support cryonics than (directly) to adopt it.
Presumably, a well-written CEV optimizer would see our desire to be useful and put us in situations where we can be useful. However, I think it’s worth noting that there is quite a bit of wiggle room between the invention of fooming AGI and the development of reversal mechanisms for cryonic suspension. Reversal of cryonic suspension could end up being something that is specifically and painstakingly developed over the course of several decades by humans with normal levels of intelligence.
So far as trustworthiness is concerned, the iterated prisoner’s dilemma suggests that an expectation of ongoing transactions leads to trustworthiness. So, signing up for cryonics implies being slightly more trustworthy than not.
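A minimal sketch of that “shadow of the future” point (my own toy illustration, not anything from the thread; the tit-for-tat partner, the standard textbook payoffs, and the continuation probability w are all assumptions): against a partner who reciprocates, defection only pays when the chance of further interaction is low.

```python
# Toy repeated prisoner's dilemma: payoff of always cooperating vs. defecting
# against a tit-for-tat partner, as a function of the probability w that the
# interaction continues for another round. Payoffs are the standard textbook
# values: T=5 temptation, R=3 mutual cooperation, P=1 mutual defection.

def expected_payoff(defect: bool, w: float, horizon: int = 10_000) -> float:
    """Approximate expected total payoff against tit-for-tat with continuation probability w."""
    T, R, P = 5.0, 3.0, 1.0
    total, prob_alive = 0.0, 1.0
    for t in range(horizon):
        if defect:
            payoff = T if t == 0 else P   # one-shot gain, then mutual defection forever
        else:
            payoff = R                    # mutual cooperation every round
        total += prob_alive * payoff
        prob_alive *= w                   # chance the relationship lasts another round
    return total

for w in (0.1, 0.5, 0.9):
    print(f"w={w:.1f}  cooperate={expected_payoff(False, w):.2f}  "
          f"defect={expected_payoff(True, w):.2f}")
# Defection wins at w=0.1, the two tie at w=0.5, and cooperation wins at w=0.9:
# the longer you expect the relationship to last, the more trustworthiness pays.
```

Here w stands in for the “expectation of ongoing transactions”; the comparison only formalizes that mechanism, not any claim about cryonicists in particular.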
The prisoner’s dilemma is only one part of the human landscape, and no one has argued that it will prove the most decisive part. Well, Robert Wright comes close by having chosen to write an entire book on the iterated prisoner’s dilemma, but that book’s analysis of the causes of increasing human wealth completely neglects the wealth-increasing potential of an explosion of engineered intelligence.
I can make the counterargument that the more resources a person needs to fulfill his desires, the more likely the person is to impose harm on a fellow human being to acquire those resources, and that, for example, I have little to fear from the proverbial Buddhist mystic who is perfectly happy to sit under a tree day after day while his body decays right in front of him. My simple counterargument implies that, everything else being equal, I have more to fear from the one who desires cryonic suspension and revival than from the one who does not.
But the more important point is that human trustworthiness is a very tricky subject. For example, your simple argument fails to explain why old people and others who know they are near the end of their lives are less likely to steal, defraud or cheat than young healthy people are.
I do not claim to know which of the groups under discussion is more trustworthy. (My only reason for advancing my counterargument was to reduce confidence in the insightfulness of your argument.) I am just passing along a tentative belief, obtained from discussions with and reading of other singularitarians and cryonicists and from observing the evolution of my own beliefs about human trustworthiness: people tend to think they know which group is more trustworthy, and they tend to think it is the group they themselves belong to.