A “do not resuscitate” kind of request would probably help with some futures that are mildly bad in virtue of some disconnect between your old self and the future (e.g., extreme future shock). But in those cases, you could always just kill yourself.
In the worst futures, presumably those resuscitating you wouldn’t care about your wishes. These are the scenarios where a terrible future existence could continue for a very long time without the option of suicide.
Edit: Replies to this comment have changed my mind. I no longer believe the scenario(s) I illustrate below are absurd, that is, so unlikely or nonsensical that they aren’t even worth acknowledging. However, I don’t know what probability to assign to such outcomes, and for all I know, it might still make most sense to think the chances are very low. I believe they’re worth considering, but I’m not claiming it’s a big enough deal that nobody should sign up for cryonics.
The whole point of this discussion is that incredibly bad outcomes, however unlikely, may happen, so we wish to prepare for them. So, I understand why you point out this possibility. Still, that scenario seems very unlikely to me. Yudkowsky’s notion of Unfriendly AI is predicated on most possible minds the AI might have not caring about human values, and so just using our particles to “make something else”. If the future turns into the sort of Malthusian trap Hanson predicts, it doesn’t seem the minds then would care about resuscitating us either. I believe they would be indifferent, right up until they realized that the space where our brains are being stored is real estate to be used for their own processing power. Again, they would obliterate our physical substrates without bothering to revive us.
I’m curious why any minds would want to resuscitate us without caring about our wishes, or what minds those would be. Why put us through virtual torture when, if they needed minds to efficiently achieve a goal, they could presumably make new ones that won’t object to, or suffer through, whatever tribulations they must labor through?
Addendum: shminux reasons through it here, concluding it’s a non-issue. I understand your concern about possible future minds being made sentient and forced into torturous labor. As much as that merits concern, it doesn’t explain why Omega would bother reviving us, of all minds, to do it.
I’m not saying it’s inevitable, but it’s a failure of imagination if you can’t think of any way the future could go horribly wrong like that.
My biggest concern is an AI or civilization that decides to create a real hell to punish people for their sins. Humans have pretty strong urges to punish those who did wrong, and our morality and views on punishment are constantly changing.
E.g., if a slaveholder were alive today, some people might want to see them tortured. In the future, perhaps they will want to punish, hypothetically, meat eaters, or people who weren’t as altruistic as possible, or something we can’t even conceive of.
Yeah, there are plenty of examples of dictators who go to great lengths to inflict tremendous amounts of pain on many people. It’s terrifying to think of someone like that in control of an AGI.
Granted, people like that are probably less likely than the average head of state to find themselves in control of an AGI, since brutal dictators often preside over unhealthy economies and are therefore unlikely to win an AGI race. But it’s not like they have a monopoly on revenge or psychopathy either.
I think sociopaths are about 4% of the population, so your scenario isn’t really that implausible. I just meant: what if all of society’s values change over time? Or just the FAI extracting our “true” utility function, which includes all the negative stuff, like the desire for revenge.
Yeah, someone made another reply to my question to that effect. Yudkowsky and MIRI emphasize how, in the space of all possible minds a general machine intelligence might develop, the region containing human-like minds is very small. So, originally, I was thinking the chance a machine mind would torture living humans was conditional upon a prior mind (human or otherwise) programming it that way, which itself depended upon a machine being built which even recognizes human feelings as mattering at all. The chances of all that happening seemed vanishingly small to me.
However, I could be overestimating the likelihood that Yudkowsky’s predictions are correct. For example, Robin Hanson believes the outcome could be much different, without superintelligence going ‘foom’, and instead being based upon human brain-emulations (HBEs). Based on related topics, I’ve assumed the Yudkowsky-Hanson AI-Foom debate is over my head, so I haven’t read it yet. However, others more knowledgeable than I apparently see merit in Hanson’s position and criticisms, including Luke Muehlhauser when I asked him a couple years ago. While MIRI may approach safety engineering in a way that doesn’t depend too much on the nature of the technological singularity, they could still be wrong about it being an intelligence explosion. I don’t claim nobody can tell which type of singularity is more likely; I merely mean I’m agnostic on the subject until I (can) examine it better.
Anyway, a singularity more like the one Hanson predicts makes it seem more likely that AGI would notice human values, and could hurt us. For example, HBEs could be controlled by hostile minds, which would care about hurting us much more than an AGI born from an intelligence explosion would. I’m not now confident that the likelihood of such scenarios is high enough that I and others shouldn’t sign up for cryonics. I myself am still undecided about cryonics, and skeptical of aspects of the procedure(s). However, I at first believed this outcome was absurd: I thought the scenario so ludicrous or contrived that it wasn’t even worth assigning a probability to. That was indeed a failure of my imagination. I don’t know what probability to assign now to outcomes where I or others wake up and suffer immense torture at the hands of a hostile future, but I no longer believe it should be utterly neglected in calculations of the value of ‘getting froze’, or whatever.
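To illustrate the kind of calculation I have in mind, here is a toy expected-value sketch; every number in it is a placeholder for illustration only, not an estimate I’d defend. The only point is that a tiny probability of an extremely bad revival can still move the total.

```python
# Toy expected-value sketch for signing up for cryonics.
# All numbers are placeholders, not actual estimates.

p_revival = 0.05             # chance cryonics works and you are revived at all
p_bad_given_revival = 0.01   # chance revival happens in a hostile future
u_good_revival = 1_000       # utility of waking up in a decent future
u_bad_revival = -100_000     # disutility of prolonged torture after revival
u_no_revival = 0             # baseline: you simply stay dead

expected_value = p_revival * (
    p_bad_given_revival * u_bad_revival
    + (1 - p_bad_given_revival) * u_good_revival
) + (1 - p_revival) * u_no_revival

print(f"Expected value of signing up: {expected_value:.1f}")
# With these placeholders: 0.05 * (0.01 * -100000 + 0.99 * 1000) = -0.5
```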
More concerning to me than outright unfriendly AI is an AI whose creators attempted to make it friendly but only partially succeeded, such that our state is relevant to its utility calculations, but not necessarily in ways we’d like.
Experimental material for developing resuscitation technology. Someone has to be the first attempted revival.
I think I did not explain my proposal clearly enough. What I’m claiming is that if you could see intermediate steps suggesting a worst-type future is imminent, or merely that one has crossed your probability threshold as “too likely”, then you could enumerate those steps in advance and request to be removed from biostasis at that point, before those who would resuscitate you had a chance to do so.
Ah, got it. Yeah, that would help, though there would remain many cases where bad futures come too quickly (e.g., if an AGI takes a treacherous turn all of a sudden).