The Extended Living-Forever Strategy-Space
I wanted to try to write this like a Sequence post, with a little story at the beginning, because that style is hard to beat if you can pull it off. For those who want to skip to the meat of the argument, scroll down to the section titled ‘The Jealous God of Cryonics’.
Bizzaro-Pascal
The year is 1600 BC, and Moses is scrambling down the slopes of Mount Sinai under the blazing Egyptian sun with two stone tablets tucked under his arms, strangely small for the enduring impact they will have on the world. Pausing a moment to take a sip of water from his waterskin, he decides to double-check that the words on the tablets match those God dictated to him before he reads them to the Israelites; it wouldn’t do to have a typo encouraging adultery! Suddenly, a great shockwave bowls Moses to the ground. It is simultaneously as loud as the universe tearing itself into two nearly identical copies and as quiet as the difference between a coin landing on heads rather than tails. Moses, trembling with shock, picks himself up, dusts off the tablets and scratches his beard. He is sure that the Second Commandment looks a bit different, but he can’t quite put his finger on it...
More than three thousand years later, Blaise Pascal is about to formulate the Wager that would make him infamous. “You see,” he says, “if God exists then the payoff is infinitely positive for believing in Him and infinitely negative for not; therefore, whatever the cost of believing, you should do it.”
“Well, I’m sceptical,” says his friend. “It seems to me that the idea of an infinite payoff is incoherent to begin with; you have no particular reason to privilege the hypothesis that the Christian God exists and wants to be worshipped; and if I were God I’d be pretty irritated that people pretended to believe in Me because of some probabilistic argument rather than from observing all of My great works.”
“But don’t you see?” Pascal rejoins. “God in His infinite goodness foresaw your objections and wrote the Second Commandment specifically to take them into account: ‘Thou shalt have no other God but Me, unless thou feelest thou canst maximise thine utility by ignoring this Commandment and worshipping multiple Gods. Seriously, I don’t mind; worship as many Gods as you want, with whatever degree of ‘true’ faithfulness versus rational utility-maximising makes you happiest (although I recommend worshipping only Gods that do not prohibit the worship of other Gods, so as to maximise your chances of getting it right and going to heaven).’ ”
“Hmm… Yes, come to think of it, there has always been something a little different about that Commandment compared to the rest. I didn’t think much of it because similar laws exist in every other major religion, which, now I reflect on it, should probably have tipped me off to the format of your Wager quite a long time before now.”
“You see, my Wager suggests you should worship the largest subset of non-contradictory Gods you possibly can; although I acknowledge that the probability of selecting the true God out of all of God-space is small (and for that God to both exist and select for heaven based on faithfulness is also unlikely), the payoff is sufficiently wonderful to make it worth the small up-front cost of most religions’ declarations of faith. I can only imagine what sort of a fanatic would seriously propose this argument in a universe where all Gods demand you sample only once from God-space!”
In the universe this Pascal describes, all you need to do to qualify for eternal life, given that a particular religion is true, is to say out loud that you are a true believer (or go through some non-traumatic initiation rite, like a baptism or the Shahadah). The probability of a God existing is still low, and the probability of that God caring that you worship Zir is still low, but it is (almost certainly) rational to take Pascal’s advice and find a maximal subset of Gods that you think maximises your chance of eternal happiness.
The Jealous God of Cryonics
Cryonics is not like Pascal’s Wager except superficially, but this little story attempts to pump an intuition that would appeal to bizzaro-Pascal. In his universe, someone who worshipped only one God would be deeply irrational. They might be able to defend their choice with some applause-light soundbites (“I have great faith, so I need no other Gods”), but in a purely utility-maximising sense, the sense in which we try to maximise the number of happy years we live, this person is behaving irrationally. But although this seems obvious to us, some (most) cryonics advocates behave as though cryonics is a ‘jealous’ God (like in our universe) rather than more accurately modelling it as a ‘permissive’ God like in bizzaro-Pascal’s universe. Cryonics doesn’t care at all if you adopt other strategies for maximising your lifespan, except insofar as they conflict with cryonics. For example, high religiosity and cryonics are logically compatible as far as I can see; if brain death really is death (that is to say, completely irreversible) then at least you have the back-up possibility that an afterlife exists. Yet it seems to me that supporters of cryonics happily stop looking for alternative life-extension strategies almost as soon as they discover cryonics (I hypothesise the actual mechanism is that someone convinces them cryonics is rational and they then forget about the rest of the strategy-space in their excitement). Certainly, I can’t find any discussions of cryonics on LessWrong promoting any alternative life-maximisation techniques, except perhaps brain plasticisation. This is a shame, because it is possible that some additional life-extension techniques could be costlessly employed by those who want to live forever, greatly increasing their expected utility.
Looking around for literature on this topic turns up a few things. Alcor, for example, have an article entitled ‘The Road Less Travelled’ discussing potential alternatives to cryonics, including desiccation and peat preservation. Brain plasticisation and chemical preservation are seriously discussed as alternatives even amongst those who are strongly in favour of freezing; the consensus is that these techniques are likely to offer a higher success rate once they are perfected, but that freezing is the way forward now. I can think of a few more outlandish methods of preservation (such as firing yourself into the heart of a black hole and assuming time dilation means you will still be alive when a recovery technique is developed, or standing in a high-radiation environment hoping that your telomeres will re-knit), but these all suffer from the fact that they are less likely to work than cryonics, and obviously so. Why would cryonicists waste time thinking about outlandish preservation techniques when they displace a more likely technique? Indeed, even if these techniques were more likely, there are good reasons to treat cryonics as a Schelling point unless a new technique obviously dominates; we want future society to concentrate its resources on one problem, especially if we are part of the generation that is first experimenting with these techniques. While it surprises me that no cryonicists seem interested in this even as an intellectual exercise, it is at least rational to ignore low-probability techniques which displace higher-probability techniques with the same payoff, for all of the above reasons.
The Extended Strategy-Space
But there seems to be no excuse for failing to consider additional strategies which complement cryonics; there exist a great many strategies which might result in revivification before cryonics does (or instead of cryonics, if cryonics turns out to be impossible) and which cost strictly less than cryonic freezing. I’ve given them short names to enable easy reference in the comments (if anyone is interested), so don’t read too much into the names. I’ve also ordered them roughly by how plausible I find them; up until the boundary between Social and Simulation Preservation I actually find the arguments more plausible than cryonics:
Diarist Preservation: Begin recording your phone calls, pay someone to archive your web presence, begin keeping obsessive diaries and blog constantly. Hope that this can be recompiled into a coherent personality at some point in the future, or at the very least be used to plug gaps in the personality of the unfrozen body.
Genetic Preservation: Take genetic samples of yourself and preserve the sequence, encoded in binary, on a platinum-iridium bar. Hope that personality is very largely genetic, and that the proportion which isn’t can be reconstructed from statistical analysis of the time period in which you live (perhaps by employing Diarist Preservation in tandem).
MRI Preservation: Subject yourself to MRI scans as often as possible (it may be helpful to fake a serious neurological condition). Ask for copies and encode them on microchips that you scatter around the world as you travel. Hope that future societies will find the information useful for constructing an em, and will find the chips if they are distributed widely enough.
Signal Preservation: Obsessively generate long streams of nonsense binary by tapping randomly at a keyboard. Assume that these long strings must correspond in some way to brain states, and that future mathematics will be advanced enough to untangle the signal from the noise. Post these long strings of text to as many internet sites as possible to preserve them. (VERY VERY IMPORTANT NOTE: If you decide to try this strategy, you must absolutely ensure that the first few characters of every message are a code known only to you, salted with (for example) the current time and then hashed, or the first word of the next string of binary you produce; a minimal sketch of the hashing option appears after this list. Otherwise unkind people could claim to be you, post their own strings, and screw up your revival. I don’t think it is a serious worry that people who can bring you back from the dead will struggle with SHA.)
Social Preservation: Form a hypothesis which says (roughly), “The more people who know about me that I can persuade to freeze their brain information with me, the more likely it is that any gap in my own brain-state can be plugged with information from another individual’s brain-state”. Act ruthlessly on this hypothesis; pay for friends and family to get frozen conditional on their memorising a list of facts about you. Offer to discount a friend’s cryo in exchange for them signing up with a different organisation from yours (in case yours has a damaging but not fatal mishap and you need perfectly-stored redundant information to back yourself up). Attend cryonics conferences like a vulture, and socialise as much as you possibly can. An additional note about this strategy (which every pro-cryonicist knows): it is hugely in your interest to take a large 21st Century contingent with you to whatever time you are revived, so that they can form a natural political bloc. Even better if the majority of that bloc know and like you!
Simulation Preservation: Bury ‘time-capsules’: lead-lined containers which explain, in as many languages as possible, who you are and express a desire to be resurrected if society has discovered that we live in a simulation and has the power to talk to the simulators. Otherwise, ask that society to rebury your letter (after translating the request into all current languages) to await the arrival of a true simulationist society. A stronger version of this is to employ one of the aforementioned Preservation techniques and add in your letter that you would be happy to be resurrected inside a simulation created by this society, based on the information preserved by that technique; this insures against the possibility that simulation is logically possible but we have not yet discovered a way to communicate with the simulator.
Philosophical Preservation: Discover a completely watertight argument which proves, perhaps probabilistically, that ‘you’ (the bit of you that you hope will survive death) is totally identifiable with something permanent, like the information on your Y-chromosome (for men) or the unordered atoms in your brain. Do whatever this argument implies to extend your life. This might sound silly, but many people really do profess to believe their ‘soul’ survives forever and that they can increase their chances of this occurring by correctly interpreting a very old book, so it is highly likely that there is an argument that would convince you, even if that argument is not actually valid. A clever rationalist might even be able to identify a subset of religious/philosophical activities that maximises their chance of eternal life in heaven (as per the introductory story).
Evolutionary Preservation: Blast genetic samples of yourself into space. Hope that eons later one sample will come to rest on a planet suitable for life and evolve into a creature identical to you, except that whereas you have mostly true beliefs, this creature will have mostly delusional beliefs that correspond one-to-one with your true beliefs. For example, while you truthfully think, “I was alive in 2010”, this creature will have the delusional belief, “I was alive in 2010”, plus whatever additional delusional beliefs it needs to make this belief cohere, for example, “I must have been stunned sometime in early 2070 (when my beliefs appear to stop) and taken to this strange planet I don’t recognise”.
Time-travel Preservation: Do something so marvellous or heinous that if time travel exists, some time travellers will travel to the moment of your triumph/crime to watch. Overpower a time traveller, and take their time machine. You might have a very low prior probability of being able to do something so brilliant/evil as to compete with the whole rest of history, but bear in mind the first successful hijack of a time machine would itself be an event worth watching by future time travellers, so you may not actually need to do anything marvellous in the first place; just make a binding resolution with yourself to steal the first time machine you come across and look to see if any police phone boxes pop up from nowhere. Making this resolution once or twice a day for the rest of your life is almost costless, although perhaps you would want to attend a combat sport class to increase the chances of a successful overpowering.
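A footnote on the authentication code suggested under Signal Preservation: here is a minimal sketch, assuming Python’s standard hashlib; the secret string and the truncation length are hypothetical placeholders, not a vetted protocol.

```python
import hashlib
import time

SECRET = "only-I-know-this"  # hypothetical stand-in for the code known only to you

def auth_prefix(secret: str) -> str:
    """Salt the secret with the current time and hash it, as suggested above."""
    salt = str(int(time.time()))
    digest = hashlib.sha256((secret + salt).encode("utf-8")).hexdigest()
    # Publish the salt alongside a truncated hash; anyone who later learns the
    # secret (e.g. from your other preserved records) can recompute and verify it.
    return salt + ":" + digest[:16]

# Prefix every posted string of keyboard noise with the code:
print(auth_prefix(SECRET) + " 01011011101000111...")
```

Anyone can post noise, but only the holder of the secret can post validly prefixed noise, which is all the scheme needs.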
Each of these strategies has a number of features which make it attractive: they are (mostly) less expensive than cryonics, they do not strictly lower your chance of cryogenic revival (and in some cases probably increase it), and all have a non-zero chance of preserving your brain-states at least until future society is advanced enough to do something with them. Even better, most of these strategies synergise well with each other; if I decide to get myself frozen I will definitely also pay for fMRIs to record my brain-state as I think about various stimuli, and store copies of those recordings with multiple institutions. I don’t think this list is exhaustive, but I do think it covers a good amount of the possible ‘live forever’ strategy-space. It does not explore strategies which are absurdly expensive or which interfere with cryonics (so it is still only one small corner of the total strategy-space), but I think it expands the area of the strategy-space most people are interested in: the bit in which you and I can act.
Since most of these would, if successful, result in an imperfect copy of yourself rather than extending your own consciousness, you could include “have children.” If you really want a perfect copy, rather than a genome enriched by a partner, then human cloning is closer to feasible than cryopreservation of adults. Cryopreservation of embryos actually works. I wonder if there would be a market for a service that promises to keep embryos frozen until human life expectancy reaches, say, 110, then bring the embryo to life by whatever methods they are using then, sharing some of the trust fund with the foster parents.
If your main requirement for cryonics is identity continuity, then this does not qualify. Children rarely think of themselves as identical to their parents. Maybe it would if there were a way to transfer a parent’s memories, feelings and experiences to their children, but that appears to be as hard a problem as uploading.
What “identity continuity” translates to is non-trivial and subject to definitional quibbles (which are in all likelihood subjective in the first place). To what extent is “thinking of themselves as identical” a consideration? You could program some barebones consciousness to think of itself as almost any person, regardless of other memory-engram congruities. One could upload your sleeping body and fork off myriad copies, all of which would share a continuous identity with you, in some sense. The very act of uploading, while taken as identity-preserving around here (and imo rightly so), is a Ship of Theseus implementation, with no single “right” solution.
We, of course, take the engineering approach. But that only gives us “if it does the same thing (functionally indistinguishable, same black-box behaviour, passes a variant of the Turing test, which is just functional indistinguishability tested under resource constraints), then meh, it’s probably a good enough fit for our purposes”. That approach doesn’t provide us with any error thresholds for what’s “close enough” and what’s not, nor for which criteria are necessary, let alone sufficient. Back to the murky waters of the swamp of deceptively-clear-sounding natural-language definitions.
I don’t think it’s far-fetched that “children are identity continuous to parents” would be included/derivable under many “sensible” thresholds.
My minimal definition of identity continuity is instrumental, individual and bidirectional: both the identity donor and the identity recipient must agree that they have the same identity as the entity on the other end. This definition avoids arguments of the type “I don’t think that if all my atoms are replaced I will be the same person” and “If I take a general anaesthetic which slows my mental processes to a halt, then I’m dead, and whoever wakes up from it is not me”. I have no problem agreeing with such people that cryo suspension would not preserve their identity, as long as they don’t insist that it would not preserve mine. Moreover, for some people this definition is much more relaxed than it is for me; they might count a few photographs and fond memories as partial identity preservation:
...As long as people don’t insist on it being a universal threshold, I don’t mind. If you agree that your identity is carried on by your children and they also agree that it is, good for you. Wouldn’t work for me, but then I’m not them.
Do you actually use this definition for moral purposes‽ If so, it would seem that psychotropic drugs / brainwashing would be a much easier method of “identity preservation” than all this messy business of freezing people or storing full brain states or whatever. Indeed, you could live indefinitely with nothing but a colony ship of gullible, uninquisitive people who have been taught really weird philosophy, and some self-inflicted brain damage. Am I misunderstanding you?
Edit: Or more concretely, just start a brainwash-y cult with some “We are all one consciousness” woowoo and then immerse yourself in it / smash yourself in the head with a brick until you believe your own doctrine. Heck, I’d be surprised if this hasn’t been done already. Those New Age gurus really are extending their lifespans!
I see no such pattern. Among people I’ve met, there’s a high correlation between support for cryonics and practicing calorie restriction, and a moderately high correlation with attending life extension conferences.
The few people I can think of who may be using cryonics as a reason for losing interest in alternatives are the ones who think cryonics has a much greater than 50% chance of working.
The consensus I see is more like “nobody knows whether they’ll work”.
I agree with you to a large extent. It’s plausible that low-tech life extension strategies could have a major impact given rapidly advancing technology, so the extra few years you buy by eating your vegetables and exercising regularly could easily turn out to be critical.
By contrast, many of the ideas in the original post are not all that plausible. For example, I have read that the resolution of a standard MRI is about 1 mm. It seems very unlikely that this would be sufficient resolution to accomplish anything worthwhile in terms of recreating a person’s mind. And even if it did, you would still run into the “matter transporter problem.”
I do think that the original poster has a bit of a point, i.e. that people tend to run with the pack in terms of their life extension decisions.
I believe that Eliezer is pro-cryonics (and pro-FAI work) not on the basis of the Pascal’s wager-type logic (high utilities overwhelming low probabilities), but because in his estimate the probability of cryonics working (and AI FOOMing) is significantly above the noise level.
Personally, I don’t find any of the strategies you mention to be plausible enough to be worth thinking about for more than a few seconds. (Most of them seem obviously insufficient to preserve anything I would identify as “me.”) I’m worried this may produce the opposite of this post’s intended effect, because it may seem to provide evidence that strategies besides cryonics can be easily dismissed.
I think the plausibility of the arguments depends in very great part on how plausible you think cryonics is; since the average estimate on this site is about 22%, I can see how other strategies which are low-likelihood/high-payoff might appear almost not worth considering. On the other hand, something like Simulation Preservation seems to me to be well within two orders of magnitude of the probability of cryonics: both rely on a future society finding your information and deciding to do something with it, and both rely on the invention of technology which appears logically possible but well outside the realms of current science (overcoming death vs overcoming computational limits on simulations). But simulation preservation is three orders of magnitude cheaper than cryonics, which suggests to me that it might be worthwhile to consider. That is to say, if you seriously dismissed it in a couple of seconds you must have very, very strong reasons to think the strategy is, say, about four orders of magnitude less likely than cryonics. What reason is that? I wonder if I assumed the simulation argument was more widely accepted here than it actually is. I’m a bit concerned about this line of reasoning, because all of my friends dismiss cryonics as ‘obviously not worth considering’ and I think they adopt this position because the probabilistic conclusions are uncomfortable to contemplate.
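To make the cost-effectiveness comparison explicit, here is a rough sketch using the orders of magnitude above; the dollar figure is purely an illustrative assumption.

```python
p_cryo = 0.22            # LW average probability estimate cited above
c_cryo = 100_000         # illustrative cryonics cost in dollars (assumption)

p_sim = p_cryo / 100     # "within two orders of magnitude" of cryonics
c_sim = c_cryo / 1000    # "three orders of magnitude cheaper"

# Probability of revival purchased per dollar:
print(p_cryo / c_cryo)   # 2.2e-06 for cryonics
print(p_sim / c_sim)     # 2.2e-05 for simulation preservation, ~10x more
```

On these made-up numbers simulation preservation buys roughly ten times as much revival probability per dollar, and the comparison only flips back in cryonics’ favour if the strategy is more than three orders of magnitude less likely.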
With respect to your second point, that this post could be counter-productive: I am hugely interested by that conclusion. A priori it seems hugely unlikely that with all of our ingenuity we can come up with only two plausible strategies for living forever (religion and cryonics), and that each would be anathema to the other camp. If the ‘plausible strategy-space’ is not large, I would take that as evidence that the strategy-space is in fact empty and people are just good at aggregating around plausible-but-flawed strategies. Can you think of any other major human accomplishment for which the strategy-space is so small? I suspect the conclusion is that I am bad at thinking up alternative strategies, rather than that the strategies don’t exist, but it is an excellent point you make and well worth considering.
I don’t know if I agree with your estimate of the relative probabilities, but I admit that I exaggerated slightly to make my point. I agree that this strategy is at least worth thinking about, especially if you think it is at all plausible that we are in a simulation. Something along these lines is the only one of the listed strategies that I thought had any merit.
I agree, and I also think we should try to think up other strategies. Here are some that people have already come up with besides cryonics and religion:
Figure out how to cure aging before you die.
Figure out how to upload brains before you die.
Create a powerful AI and delegate the problem to it (complementary to cryonics if the AI will only be created after you die).
This is an excellent comment, and it is extremely embarrassing for me that in a post on the plausible ‘live forever’ strategy-space I missed three extremely plausible strategies for living forever, all of which are approximately complementary to cryonics (unless they’re successful, in which case, why would you bother?). I’d like to take this as evidence that many eyes on the ‘live forever’ problem genuinely do result in a utility increase, but I think the more plausible explanation is that I’m not very good at visualising the strategy-space!
I thought this was a wonderful post. Funny, made a bunch of LW-relevant points, and was informative as a summary of a particular topic. The story wasn’t that great, but I guess the post as a whole worked, so maybe I’m not giving it enough credit as an important set-up piece.
It’s a shame, but none of the extended strategies appeal to me very much, for many reasons beyond how dubiously viable they sound. Then again, I’m not yet signed up for cryonics either, which suggests I’m not that into revival strategies as a whole. I did take issue with the suggestion that people signed up for cryonics don’t pursue other life extension strategies, but since the topic is centred on revival-based strategies, if we read that comment in that context I think it’s still a somewhat valid point. In any case, the people trying to reach ‘actuarial escape velocity’ and the like would balk at the initial formulation’s suggestion. There are plenty of people trying plenty of different things in the attempt to live forever, and cryonics is often but one of many cards in their hand.
“Less likely” is putting it mildly, even if you have a black hole handy. Anything thrown into it disappears from existence in its own frame of reference within milliseconds, so the odds of recovery are worse than for a simple burial.
Incidentally, immortality through time travel was apparently the main driver for Gödel to discover his time-travelling universe. We do know, however, that we do not live in that universe.
That time-traveling universe is interesting. Physics question: Is it at all possible, never mind how likely, that our own universe contains closed timelike curves? What about closed timelike curves that we can feasibly exploit?
Very, very, very unlikely. Hawking once wrote a paper called ‘Chronology protection conjecture’, arguing that any time loop would self-destruct before forming. Even if such loops existed, it’s not as though you could travel along them to do things; there is no entering or exiting the loop. Everything in the Groundhog Day loop has been there forever and will be there forever, because there is no difference between the ‘first time through the loop’ and the ‘n-th time through the loop’. This is all predicated on classical general relativity. Quantum gravity may change things, but no one knows anything about quantum gravity.
Thanks for the info. Hmm. What do you mean by “There is no entering or exiting the loop?” Could the loop be big enough to contain us already?
I’m not concerned about traveling backwards in time to change the past; I just want to travel backwards in time. In fact, I hope that I wouldn’t be able to change the past. Consistency of that sort can be massively exploited.
Note the reversibility issue: after a complete loop, you end up in exactly the same state you started in, so all the dissipated energy and information must somehow return. Unless you are willing to wait for the Poincaré recurrence, this is not very likely to happen, and in this case the wait time is so large as to be practically infinite.
If we are in a time loop we won’t be trying to escape it, but rather exploit it.
For example: Suppose I find out that the entire local universe-bubble is in a time loop, and there is a way to build a spaceship that will survive the big crunch in time for the next big bang. Or something like that.
Well, I go to my backyard and start digging, and sure enough I find a spaceship complete with cryo-chambers. I get in, wait till the end of the universe, and then after the big bang starts again I get out and seed the Earth with life. I go on to create a wonderful civilization that keeps to the shadows and avoids contact with “mainstream” humanity until, say, the year 2016. In the year 2014 of course, my doppelganger finds the machine I buried in his backyard...
I’m not saying this scenario is plausible, just that it is an example of exploiting time travel despite never breaking the loop. Or am I misunderstanding how this works?
As long as every loop is identical to every other loop, why not. I don’t know if you would call it “exploit”, since, classically, your actions are predefined (and already happened infinitely many times), and quantum-mechanically, they are determined up to a chance (or, equivalently, every outcome happens in one of the Everett branches).
Garbage in, garbage out. There is no reason to expect that you would get meaningful information that way.
We know that while twins often have similar personalities, they are still different people.
I’m not sure I agree with your analysis of the first; it is reasonable to assume that when a person generates pseudorandom noise they are masking a ‘signal’ with some amount of true randomness. We don’t know enough to say for absolute certain that the input is totally garbage, and we have good reason to believe people are actually very bad at generating random numbers. Contrast that with, for example, the fact that we have pretty good reasons to think that bringing someone back from the dead is a hard project; I don’t think you’re fairly applying the same criteria across preservation methods.
A cryogenically frozen brain contains a vast amount of information. Even if we can’t revive it directly, we could slice and scan it to recover much of that information.
On the other hand, randomly hacking at a keyboard might reveal a few patterns, but those patterns don’t tell you how billions of neurons are connected to each other.
The string generated would say much more about the keyboard used to type it than any properties of the typist’s mind. :/
My take on it wouldn’t be so much that it’s unlikely to contain meaningful information as that it’s unlikely to contain enough meaningful information. Whatever (almost certainly very bad) PRNG function you’re implementing when you type out random strings, it’s not going to leak more than a bit of brain state per bit of output, and most likely very much less than that. Humans have tens of billions of neurons and up to about 10^15 synapses; even under stupidly optimistic assumptions about neurological information storage and state sampling, getting all of that out would take many lifetimes’ worth of typing.
I basically agree with you that the strategy seems pretty unlikely. But I think you are over-harsh on it; you don’t need to reconstruct the entire brain, just the stuff that deals with personal identity. If you can select from any one of thirty keys on your keyboard then every ten letters you type has 10^15 bits of entropy, so it seems possible that if somebody knew absolutely everything about the state you were in when typing, they could reconstruct you just from this. You are also not restricted to tapping away randomly; I suspect words or sentences would leak way more than pseudorandom tapping. At any rate, this strategy is almost free, so you’d need astonishingly good reasons not to attempt it if you plan on attempting cryonics.
I think those reasons exist (I’m skeptical the information would survive) but I don’t think the theory is quite as much in the lunatic fringe as you do.
A little less than 50 bits of entropy, actually, if you’re choosing truly randomly. Total entropy of a sequence scales additively with additional choices, not multiplicatively: four coin tosses generate four bits of entropy. 50 bits is enough to specify an option from a space of around 10^15, but the configuration space of those 10^10 to 10^11 neurons in the human brain is vastly larger than that.
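To make the arithmetic concrete, a quick back-of-envelope sketch (the five-keystrokes-per-second typing rate is an assumed figure):

```python
import math

# Entropy is additive across independent keystrokes:
bits_per_key = math.log2(30)        # ~4.91 bits per uniformly random keystroke
print(10 * bits_per_key)            # ~49.1 bits for ten keystrokes
print(30 ** 10)                     # ~5.9e14: the *space* has ~10^15 options

# Even granting one bit of brain state per bit of output (very generous),
# emitting ~10^15 bits at 5 keystrokes per second would take:
seconds = 1e15 / (bits_per_key * 5)
print(seconds / (3600 * 24 * 365))  # ~1.3 million years of non-stop typing
```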
I didn’t know that. Fair enough; it seems likely ‘signal preservation’ is much more costly than I originally realised and not worth pursuing (I think the likelihood of revivification is the same as or better than cryonics, but the cost in terms of hours spent tapping at a keyboard is more than any human could pay in one lifetime).
You assume that keystrokes are independent from each other. They aren’t.
They don’t.
You’d probably get a lot more relevant information about yourself by using various quantified-self (QS) tools like MyBasis. I personally didn’t get a MyBasis, but I have preordered Angel.
My Anki review data pile produces a bunch of information about my brain state that’s of much higher quality than random keyboard typing.
Anki data in addition to heart rate data from Angel tells you which Anki cards produced hypothalamic reactions.
Also, avoiding dying in ways that destroy brain state. I’m not sure how probable those are, or how easy they are to avoid, and if that includes dementia (and so on) it gets rather common and tricky.
This is very true. I agonised about including a ‘Structure your life in such a way that you minimise the probability of a death which destroys your brain’ option, but decided in the end that a pedant could argue that such a change to your lifestyle might decrease your total lifetime utility, and so isn’t worth it for certain probabilities of cryonics’ success.
Supplemental data preservation seems like a synergistic match with cryonics. You’d want to collect vast amounts of data with little effort, so no diaries or random typing or asking friends to memorise facts. MRIs and other medical records might help; keeping a video or audio recording of everything you do, and recording everything you do with your computer, should take little time and might preserve something that would aid cryonic preservation.
Simulation-based preservation attempts may be more likely to succeed than people expect, on the logic that simulated humans likely outnumber physical humans (we could be in a simulation to determine how many simulations per human we will eventually make ourselves). However, it is clear that the simulator(s) either are already communicating with us or do not care to, and to gain any more direct access to their attention we’d have to hack the simulation, in which case there may be cleverer things to do than call attention to our hacking. Then again, the simulators likely have highly advanced security technology compared to ours. Alternatively, given that we are probably being simulated by other humans, and they might be watching, we may be able to appeal to their empathy.
Evolutionary Preservation and Genetic Preservation depend on a misunderstanding of genetics, Philosophical Preservation on a misunderstanding of the nature of reality versus rationalisation, and Time-travel Preservation suggests that making a commitment that 10-50% of humans have already made will make you notable to time travellers. This sort of thing detracts from your suggestion, since you’re grasping at straws to find alternatives.
Granted, it’s hard to find alternatives. I suppose EEG data could be collected as well, and would also have research benefits. However, like most of the other data that could be collected, it would probably only suffice as a sanity check on your cryonic reconstruction.
I don’t disagree that I was grasping at straws for some of the more outlandish suggestions, but this was deliberate, to try to explore the full boundaries of the strategy-space. So I take most of your criticism in the constructive spirit in which it was intended, but I do think maybe you are a bit confused about ‘Philosophical Preservation’ (no doubt I explained it very badly to avoid using the word ‘religion’). My point is not that you convince yourself, “I will live forever because all life is meaningless and hence death is the same as life”; it is that you find some philosophical argument that indicates a plausible strategy and then execute that strategy. A simple example would be that you discover an argument which really, genuinely proves Christianity offers salvation and then get baptised, or prove to your satisfaction that the soul is real and then pay a medium to continue contacting you after you die. Again, I agree this is outlandish, but there must be something appealing about the approach, because it is unquestionably the most popular strategy on the list in a worldwide sense.
That should be Bizarro rather than Bizzaro, I think.
It seems to me that given all the causes of premature death (e.g. lifetime fatal cancer rate of 20%), cryonics and the other strategies of yours, taken together, are utterly insignificant in comparison with a healthy lifestyle that decreases the risk of premature death by even one percent.
Of course, some people assign very high probability to cryonics as is presently implemented. It may not be a harmless belief if there’s risk compensation at play—it’d be interesting to correlate BMI with the estimated probability of cryonics working, among those trying to live forever.
With regard to the solution space for the problem of preserving information in the brain: it is as huge as the space of all possible chemical preservative mixes, and it seems to me that somewhere within that space there’s a solution along the lines of cutting the brain into small pieces and putting them into a jar of formaldehyde or a similar cross-linking preservative. Very cheap, and consequently of no interest to the sort of people who offer to “cryopreserve” bodies that have been stored in dry ice for two weeks.
And on a wider scale, there are young people dying of preventable causes right now who could be saved with far less money. So it’s just not worth it to work on brain preservation for charitable reasons.
I’ve seen discussion of Diarist (and maybe MRI) Preservation on LW before. None of the others seem to me plausible enough to even bother considering, and even those seem useful primarily as a supplement for the people reconstructing you from cryonics.