What flavor of opposition do you anticipate? “selfish”, “won’t work/wasteful”, or “weird”? If it is the “selfish” objection, you might consider the tactic of signing up your daughter first.
(I have comments further down in this thread about the odds for cryonics and changes in my views over time.)
The opposition I got when I told my parents that such a thing exists was that they didn’t want to wake up as machines. I think they didn’t agree that they would still be the same person.
Combine that with the uncertainty of whether you’ll actually be frozen, the uncertainty of whether you’ll ever wake up, the chance of a Blue Gender scenario*, and of course the cost, and it stops being such an obvious decision. Blue Gender is probably the biggest factor for me.
*Blue Gender is an anime about a kid who signed up for cryonics and woke up while being evacuated from fugly giant insects.

God forbid. But even you guys suggest that FAI has a 1% chance of success or so. Is it so great to die, be reborn, and die AGAIN?
Be careful about evidence from fiction.
Let’s see...
What are the chances of being revived without AGI? It’s possible, but probably less likely, for a couple of reasons: without AGI it’s harder to reach the required technological level, and without AGI it’s harder for humanity to survive existential risks long enough to reach that level in the first place.
But that’s not all. If this AGI isn’t Certified Friendly, the chances of humanity surviving for very long after it starts recursively improving are also pretty slim.
So chances are, if you are woken up, it’ll be in a world with FAI. If things go really badly, you’d probably never find out...
Am I making up a just-so story here? Do others think this makes sense?
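For what it’s worth, the argument can be made explicit with a toy Bayes calculation. Here is a minimal sketch in Python; every probability in it is invented purely to illustrate the shape of the argument, not to claim any actual values:

    # Toy Bayes calculation for "if you wake up, it's probably FAI".
    # Every number below is made up purely for illustration.

    # Hypothetical priors over what the world looks like by revival time:
    p_fai = 0.05     # a Friendly AI exists
    p_ufai = 0.45    # an unFriendly AGI exists
    p_no_agi = 0.50  # no AGI at all

    # Hypothetical chance of actually being revived in each world:
    p_rev_fai = 0.9     # an FAI that cares about humans revives you
    p_rev_ufai = 0.01   # a UFAI almost certainly doesn't bother
    p_rev_no_agi = 0.1  # possible, but technologically much harder

    # Total probability of revival, then Bayes' rule:
    p_rev = p_fai * p_rev_fai + p_ufai * p_rev_ufai + p_no_agi * p_rev_no_agi
    p_fai_given_rev = p_fai * p_rev_fai / p_rev

    print(f"P(revived) = {p_rev:.3f}")                  # ~0.10
    print(f"P(FAI | revived) = {p_fai_given_rev:.2f}")  # ~0.45

Even starting from a 5% prior on FAI, conditioning on actually being revived moves almost half the probability mass onto FAI worlds, simply because the other worlds rarely revive anyone. Different made-up numbers give different results, but the direction of the update is robust.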
The possibility of being woken up by a UFAI might be regarded as a good reason to avoid cryonics.
From what I know, the danger of UFAI isn’t that such an AI would be evil like in fiction (anthropomorphized AIs), but rather that it wouldn’t care about us and would want to use resources to achieve goals other than what humans would want (“all that energy and those atoms, I need them to make more computronium, sorry”).
I suppose it’s possible to invent many scenarios in which such an evil AI would arise, but based on the information I have now, that seems unlikely enough that I wouldn’t gamble away a chance at life (versus certain death) on this sci-fi plot.
But if you are scared of UFAI, you can do something now by supporting FAI research. It might actually be more likely for us to face a UFAI within our current lives than after being woken up from cryonic preservation (since just the fact of being woken up is probably a positive sign of FAI).
I presume he was referring to dystopias and wireheading scenarios that he could hypothetically consider worse than death.
That was my understanding, but I think that any world containing an AGI that isn’t Friendly probably won’t be very stable. If that happens, I think it’s far more likely that humanity will be destroyed quickly and you won’t be woken up than that a stable but “worse than death” world will form and decide to wake you up.
But maybe I’m missing something that makes such “worse than death” worlds plausible.
I think you’re right. The main risk would be Friendly to Someone Else AI.
I hope so. Most UFAI scenarios suggested so far, IIRC, end with everyone either dead or reduced to mindless blobs of endless joy (which may or may not be the same thing, though I’d pick wireheading over death). But remember that the UFAI’s designers, stupid though they may be, are unlikely to forget “thou shalt not kill featherless bipeds with straight nails”. So there’s a disturbing and non-negligible chance of waking up in Christian heaven.
Edit: So after all this, does cryonics still sound like a good idea? If yes, why? I really, really WANT there to be reasons to sign up. I want to see that world without traffic jams or copyright lawyers. But I’m just not convinced, and that’s depressing.
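To make my hesitation concrete: the decision above is really just an expected-value calculation over the uncertainties I listed. A toy sketch in Python, with every number made up (plug in your own):

    # Toy expected-value comparison: sign up for cryonics vs. decline.
    # All probabilities and utilities below are invented for illustration.

    cost = 0.5        # lifetime cost of signing up (fees, social friction)
    p_frozen = 0.7    # you actually get preserved in good condition
    p_revived = 0.1   # preservation works and someone wakes you up
    p_good = 0.9      # given revival, the world is worth living in
    u_good = 100.0    # utility of waking up in an FAI world
    u_bad = -200.0    # utility of a Blue Gender style, worse-than-death world
    u_dead = 0.0      # the default outcome either way

    ev_signup = p_frozen * p_revived * (p_good * u_good + (1 - p_good) * u_bad) - cost
    ev_decline = u_dead

    print(f"EV(sign up) = {ev_signup:+.1f}")   # +4.4 with these numbers
    print(f"EV(decline) = {ev_decline:+.1f}")  # +0.0

With these particular numbers signing up wins, but the sign flips as soon as the “worse than death” branch gets much more probable or much worse, which is exactly where my uncertainty lies.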
Or in “The Metamorphosis of Prime Intellect”.
Prime Intellect was like this close to being Friendly.
Yep, you’ve got to get your AI like 99.8% right for it to go wrong that way.
And given Lawrence’s 42 years of life after reverting the Change, why ever did he not work on getting another 0.199% right? In fact, what was Caroline thinking, reverting the Change before they had a solid plan for post-Prime Intellect survival?
Fictional characters and their mortality fetishes. Pah.
The correct interpretation of the ending (based on the excerpt from the sequel posted and an interview localroger did with an Australian radio/podcast host) is that Caroline did not really revert the Change; Prime Intellect remained in control of the universe.
http://www.kuro5hin.org/prime-intellect/mopidnf.html
“The Change” consisted of keeping humans in a simulation of the universe (and turning the actual universe into computronium) instead of in the universe itself. So when Prime Intellect “reversed the Change”, it was still as powerful as it had been before. What had happened was that Prime Intellect had been convinced that the post-Change world it created was not the best way of achieving its goals, so it set up a different universe. (I imagine that, as of chapter 8, Prime Intellect’s current plan for humanity is something like Buddhist-style reincarnation; after all, its highest priority is to prevent human deaths.)
Actually, I’m more tempted to say that he was friendly, just not generally intelligent enough. Some of the humans seemed really silly, though…
I’ve no idea what extrapolated volition would mean in a population with that many freaks :-)
I agree. Prime Intellect is absolutely friendly in that most important sense of caring about the continued existence and well-being of humans.
It was a good story, but I’m not sure that humans would actually have behaved as they did in that universe; or perhaps we only saw a small subset of it. For example, we saw no one make themselves exponentially smarter. No one cloned themselves. No one merged consciousnesses. No one tried to convince Prime Intellect to reactivate the aliens inside a zoo that would let them exist, and humanity interact with them, without the danger of the aliens gaining control of the Technology.
If I could choose between waiting around for Eliezer to make Friendly AI (or fail) and the universe of Prime Intellect, I would pick the latter in a heartbeat. I don’t see why Fun Theory doesn’t apply there.