Great article. I wonder if it will increase cryonics memberships?
I expect it will
Apparently you were right.
From Alcor:
“Some of you may already be familiar with Tim Urban’s remarkable blog, Wait But Why. You might be among the 336,693 subscribers to Tim’s blog, or you might just have come across one of his stunningly detailed and clever posts, such as those on procrastination, the genius of Elon Musk, The AI Revolution, or Putting Time in Perspective.
A few days ago, Tim posted what is possibly the single best piece ever written on cryonics. Warning: It is long and, once you start reading it, you will find it hard to stop. Please use it to persuade your non-cryonicist friends and relatives! The blog post has already generated a surge in visits to Alcor.org, in people engaging Marji in online chat, and in serious requests for membership information packets. You can find it at Wait But Why: Why Cryonics Makes Sense.”
I was wondering about this and emailed Alcor’s CEO, Max More. Apparently this article alone is responsible for over 25 memberships.
“Great article. I wonder if it will increase cryonics memberships?”
Me too. I feel like more people will sign up after that article.
I suppose the article does a good job answering some of the common objections, but I still think the most important thing that’s stopping people from signing up is the fact that they just don’t care: after all, life sucks, but at least then you die.
That said, there is one argument that I find kind of powerful that articles like this don’t usually touch on (for somewhat obvious reasons): the point made, for example, in the preface to the finale of the Ultimate Meta Mega Crossover, that if we actually live in an infinite multiverse/many-worlds/nested simulverse/etc., we may be bound to find ourselves resurrected by someone eventually anyway, and cryonics could be a way to try to make sure that someone is friendly.
I’m not really sure what to make of that argument though. I wonder if there’s anybody who’s signed up because of reasons like that, despite not having any interest in cryonics in general?
I don’t believe in nested simulverses etc., but I feel I should point out that even if some of those things were true, waking up one way does not preclude also waking up in one or more of the other ways.
You mean none of what I mentioned? Why not?
“…even if some of those things were true, waking up one way does not preclude also waking up in one or more of the other ways.”
You’re right. I should have said “make it more likely”, not “make sure”.
“You mean none of what I mentioned? Why not?”
Same reason I don’t believe in god. As yet we have ~zero evidence for being in a simulation.
Your odds of waking up in the hands of someone extremely unfriendly are unchanged. You’re just making it more likely that one fork of yourself might wake up in friendly hands.
We have evidence (albeit no “smoking-gun evidence”) for eternal inflation; we have evidence for a flat, and thus infinite, universe; and string theory is currently our best guess at what the theory of everything looks like. These all predict a multiverse where everything possible happens, and where somebody should thus be expected to simulate you.
“Your odds of waking up in the hands of someone extremely unfriendly are unchanged. You’re just making it more likely that one fork of yourself might wake up in friendly hands.”
Well, I think that qualifies. Our language is a bit inadequate for discussing situations with multiple future selves.
I find that about as convincing as “if you see a watch there must be a watchmaker” style arguments.
There are a number of ways theorized to test if we’re in various kinds of simulation and so far they’ve all turned up negative.
String theory is famously hard to use to predict even mundane things, however elegant it may be, and “flat” is not the same as “infinite”.
“I find that about as convincing as ‘if you see a watch there must be a watchmaker’ style arguments.”
I don’t see the similarity here.
“There are a number of ways theorized to test if we’re in various kinds of simulation and so far they’ve all turned up negative.”
Oh?
“String theory is famously hard to use to predict even mundane things…”
It basically makes no new testable predictions right now. That doesn’t mean it won’t do so in the future. (I have no opinion about string theory myself, but a lot of physicists do see it as promising. Some don’t. As far as I know, we currently know of no good alternative that’s less weird.)
By “the preface” do you mean the “memetic hazard warnings”?
“Concepts contained in this story may cause SAN Checking in any mind not inherently stable at the third level of stress. Story may cause extreme existential confusion. Story is insane. The author recommends that anyone reading this story sign up with Alcor or the Cryonics Institute to have their brain preserved after death for later revival under controlled conditions. Readers not already familiar with this author should be warned that he is not bluffing.”
I don’t think that is claiming that it is a rational response to claims about the world.
This is a quantum immortality argument. If you actually believe in quantum immortality, you have bigger problems. Here is Eliezer offering cryonics as a solution to those, too.
“By ‘the preface’ do you mean the ‘memetic hazard warnings’?”
Yes.
“I don’t think that is claiming that it is a rational response to claims about the world.”
I don’t get this. I see a very straightforward claim that cryonics is a rational response. What do you mean?
“This is a quantum immortality argument. If you actually believe in quantum immortality, you have bigger problems.”
I’ve read that as well. It’s the same argument, essentially (quantum immortality doesn’t actually have much to do with MWI in particular). Basically, Eliezer is saying that quantum immortality is probably true, it could be very bad, and we should sign up for cryonics as a precaution.
Why would someone make major decisions based on metaphysical interpretations of quantum physics that lack experimental verifiability? That seems like a poor life choice.
Tegmark 4 (the mathematical-multiverse level) is not related to quantum physics. Quantum physics does not give an avenue for rescue simulations; in fact, it makes them harder.
As a simulationist, you can somewhat salvage traditional notions of fear if you retreat into a full-on absurdist framework where the point of your existence is to give a good showing to the simulating universes; alternatively, risk avoidance is a good Schelling point for a high score. Furthermore, no matter how much utility you will be able to attain in Simulationist Heaven, this is your single shot at attaining utility on Earth, and you shouldn’t waste it.
It does take the sting off death, though, and may well be maladaptive in that sense. That said, it seems plausible that a lot of simulating universes would end up with a “don’t rescue suicides” policy, purely out of a TDT (timeless decision theory) desire to avoid an infinite suicidal-regress loop.
I am continually amused at how Catholic this cosmology ends up being by sheer logic.
“you can somewhat salvage traditional notions of fear … Simulationist Heaven … It does take the sting off death, though”
I find the often-prevalent optimism on LW regarding this a bit strange. Frankly, I find this resurrection stuff quite terrifying myself.
“I am continually amused at how Catholic this cosmology ends up being by sheer logic.”
Yeah. It does make me wonder if we should take a much more critical stance towards the premises that lead us there. Sure, the universe is under no obligation to make sense to us; but isn’t it still a bit suspicious that it’s turning out to be kind of bat-shit insane?
“That seems like a poor life choice.”
As opposed to the usual “I’ve had a few beers and it seemed like a good idea at the time”…? X-)
“Why would someone make major decisions based on metaphysical interpretations of quantum physics that lack experimental verifiability?”
Perhaps you shouldn’t. That said, it is recommended by Eliezer Yudkowsky, and his words often weigh quite heavily here.
I don’t necessarily agree that a lack of experimental verifiability means we shouldn’t take something into account when making decisions, if we have enough other reasons to think that it’s true.
Arguments from authority are equally ill-advised :)
By the way, you will find that Mr. Yudkowsky’s positions are not held by everyone here.
Of course not. But whether people here agree with him or not, they usually at least think that his arguments need to be considered seriously.
This article made me think about how, since really the beginning of time, each generation of people has come up with some completely rational (at the time) argument for why they will get into the afterlife/heaven.
The Egyptians tried really hard to preserve the body using the best science they had available at the time, and it kinda worked: they never suffered the ‘second death’ (though I guess not in the way they hoped).
And that thinking forced me to have a good long think about whether cryonics is just the same as previous beliefs or whether it’s something new. I did a video on it (no, I don’t want to spam you guys), but this forum has exactly the kind of people I really want to engage with: people who can both agree with and question something at the same time. So here’s my take on what cryonics actually is: https://www.youtube.com/watch?v=etRz6qjVXs0
Heh. Damn, beat me to it.
I find this interesting because Tim doesn’t seem to come from the LW-sphere, yet arguments that I typically associate with LW-type people still clicked with him. That may say more about what I’m exposed to than anything else, though.
He doesn’t come from the LW-sphere but he’s obviously read a lot of LW or LW-affiliated stuff. I mean, he’s written a pair of articles about the existential risk of AGI...
He explicitly quotes Eliezer, e.g.:
“Most cryonicists have a hunch that you can survive cryopreservation intact (cryonicist Eliezer Yudkowsky argues that “successful cryonics preserves anything about you that is preserved by going to sleep at night and waking up the next morning”) but they also admit that this is yet another variable they’re not sure about. You might even want to consider this a fifth “If” to add onto our list: If what seems to be a revived me is actually me…”
and also mentions at the end that he noticed Eliezer’s writings commonly turn out to have something in common with what he wants to write about.
I think maybe we just interpreted it differently. That reads to me like someone on the outside coming in, not someone on the inside going out.
My interpretation was the same as yours, and I never said anything that contradicts it. I just provided some relevant information.
You might have exhibited a tendency to assume that all arguments attack your position by default?
Possibly.
I’m going to stick to possibly, though after a moment’s mental grappling I realized that if I answer ‘yes’ to your question I’m acknowledging its truth, while if I answer ‘no’ to your question I’m demonstrating its truth… if that was intentional, well played. :-P
Herherher. This is why I always thought I would fit in the role of an evil mastermind.