I really liked the recent essay Who are the Experts on Cryonics? by Max More.
Here’s why I think you might like this article:
There are a lot of generally relevant points about when/how to trust experts, not specific to cryonics.
I think it does a really good job of explaining how to identify the relevant experts in cryonics, so you can assess the value of current and potential cryonics approaches.
It includes an interesting list of famously wrong predictions made by ‘experts’.
Here’s the table of contents:
Whose expertise is relevant to evaluating cryonics?
Experts dropping the ball
Factors to consider in evaluating expertise
How experts can be wrong
Caution: Expert ahead!
I hate that we even have the term cryonics. I think that brain preservation is best done without freezing. As someone who has cryopreserved and studied a fair amount of rodent and human brain tissue, I can say cryonics sucks. CLARITY is the way to go. You get a much more stable product that can be non-destructively imaged many times over. It’s stable at room temperature, so you don’t have to trust some company to keep your brain from ever thawing. You can image thick slices (around 2 inches), which makes the task of creating a cohesive digital model far easier and less error prone. If you want, you can still do electron microscopy on the tissue after you’re done imaging it with laser microscopy, and then integrate the results. You will gain substantial advantages by doing this over going straight to electron microscopy.
Note that our blog is called “The Biostasis Standard.” Yes, the cryonics term is not ideal. Biostasis subsumes cryonics and I prefer it, but far more people are familiar with “cryonics”, so it will take a long time to transition terms, if it ever happens.
One research project Biostasis Technologies is behind is vitrifixation, cryopreservation combined with chemical fixation. It has some advantages in certain circumstances. Ideally, we want a range of cryo and non-cryo preservation approaches, each of which may be the best fit for particular situations, such as cases with long ischemic times.
I think the emphasis even on preservation is misguided at this point. I think it’s time now to shift emphasis to uploading & emulation.
Chemical fixation is good, sure. But we need to assemble a full map of the neurons and their connections in the brain. The current best way to do this is to image the brain in the relatively thick sections that become possible once you have chemically fixed and optically clarified the tissue. This greatly facilitates axon tracing, since many fewer cuts are needed. You can work with slices around 4 cm thick, or a bit more, instead of needing slices thinner than tissue paper.
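To make the “many fewer cuts” point concrete, here is a rough back-of-the-envelope comparison. The brain extent and slice thicknesses below are my own illustrative assumptions, not figures from the comment above:

```python
# Rough, illustrative slice-count comparison (all numbers are assumptions).
brain_extent_cm = 15.0      # assumed vertical extent of tissue to be sectioned

clarity_slice_cm = 4.0      # thick slices after clearing, per the comment above
lm_slice_cm = 50e-4         # ~50 micrometers, a typical uncleared light-microscopy section
em_slice_cm = 50e-7         # ~50 nanometers, a typical EM ultrathin section

for name, thickness in [("cleared thick slices", clarity_slice_cm),
                        ("uncleared LM sections", lm_slice_cm),
                        ("EM ultrathin sections", em_slice_cm)]:
    print(f"{name}: ~{brain_extent_cm / thickness:,.0f} cuts")
# cleared thick slices: ~4 cuts
# uncleared LM sections: ~3,000 cuts
# EM ultrathin sections: ~3,000,000 cuts
```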
Also, the risk of damage from cryoslicing brain tissue is significant. I’ve accidentally destroyed quite a few cryopreserved brain slices while trying to transfer them to slides. If you are trying to slice an entire non-clarified human brain thin enough for light microscopy, you’ll face quite a lot of risk of damage. The other alternative is hard plasticization and even thinner slicing for electron microscopy. That also carries damage risk, and it’s something you can still optionally do to the clarified brain once you are done light-imaging it. The advantage of doing thick-slice light imaging first is that it gives you a map, the full set of 3D positions of all the neurons and their axons & dendrites, to which you can then add the additional detail (e.g. synapse strength) from the later electron microscopy. If you don’t have the map, then you have to correctly assemble all those super-thin slices into a 3D structure without mismatching axons and dendrites. That’s another huge source of error.
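As a toy illustration of how a coarse light-microscopy map can anchor later electron-microscopy detail, here is a minimal nearest-neighbor matching sketch. The synthetic coordinates, the 10 µm tolerance, and the idea of matching on soma positions are all simplified assumptions of mine, not a description of any real reconstruction pipeline:

```python
import numpy as np

# Toy "map" from thick-slice light imaging: 3D soma positions (micrometers).
rng = np.random.default_rng(0)
lm_map = rng.uniform(0, 1000, size=(200, 3))

# Toy detections from a later EM pass: the same cells, slightly perturbed
# by sectioning and imaging distortions.
em_detections = lm_map + rng.normal(0, 2.0, size=lm_map.shape)

# Match each EM detection to its nearest neuron in the light-microscopy map.
diffs = em_detections[:, None, :] - lm_map[None, :, :]   # shape (n_em, n_map, 3)
dists = np.linalg.norm(diffs, axis=-1)
nearest_idx = dists.argmin(axis=1)
nearest_dist = dists.min(axis=1)

max_match_um = 10.0                  # assumed tolerance; beyond this, flag as ambiguous
matched = nearest_dist < max_match_um
print(f"{matched.sum()} of {len(em_detections)} EM detections matched to map cells")
```

The point is only that the map turns one global 3D reassembly problem into many local matching problems; a real pipeline would need deformable registration rather than a fixed distance threshold.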
Also, when doing electron microscopy, you must choose a single set of things to label. You don’t get to change your labels. There are tens or hundreds of importantly relevant proteins it would be useful to know about when trying to upload a brain. With clarified brain tissue, you can label these proteins a few at a time, image them, wash the labels out, and put in new labels. You can safely repeat this process hundreds of times. Thus, you get a much more complete picture of all the proteins. And you can label with multiple colors at once, which means you can have a ‘reference label’ to which all the others get registered in 3D space in your model. This greatly reduces the localization error in your model.
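Here is a minimal sketch of that label/image/wash loop, with every round registered back to a shared reference label. The protein names, the simulated drift, and the translation-only alignment are all illustrative assumptions, not an actual CLARITY protocol:

```python
import numpy as np

rng = np.random.default_rng(1)
true_ref = rng.uniform(0, 500, size=(500, 3))   # reference-label puncta (micrometers)

# Hypothetical target proteins, imaged a few at a time across rounds.
rounds = [["synapsin", "PSD-95"], ["gephyrin", "VGLUT1"], ["MAP2", "GFAP"]]

model = {}          # protein name -> point cloud in the shared reference frame
ref_round0 = None

for proteins in rounds:
    drift = rng.normal(0, 5.0, size=3)            # stage/tissue drift for this round
    ref_seen = true_ref + drift                   # the reference label, re-imaged this round
    if ref_round0 is None:
        ref_round0 = ref_seen
    # Translation-only alignment of this round's reference channel to round 0
    # (a real pipeline would fit a rigid or deformable transform).
    offset = ref_round0.mean(axis=0) - ref_seen.mean(axis=0)
    for p in proteins:
        signal = true_ref + drift + rng.normal(0, 0.5, size=true_ref.shape)
        model[p] = signal + offset                # registered into the shared frame
    # ...wash labels out here before the next round...

print("proteins in the registered model:", sorted(model))
```

Because every round shares the same physical reference label, each protein’s coordinates land in one common frame instead of each imaging round carrying its own independent alignment error, which is the localization-error reduction described above.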
Humanity is at the hinge of history, and one of the possible paths to safety from the AGI transition is having successful Whole Brain Emulations that give us digital entities we can trust more than alien-minded ML models. Thus, getting this tech working in the next 5-10 years could be critical.
This seems far more important to me than the work of trying to help people preserve their brains so they can be uploaded once humanity has survived the singularity. That’s a thing that impacts just a few people, rather than all of humanity and the future of our descendants throughout our lightcone.
idk what CLARITY is, but yeah, I’d love to see room-temperature preservation protocols developed for human brain preservation. it could also significantly reduce cost, given that a large fraction of the cost goes towards paying for indefinite liquid nitrogen refills
Nectome is working on aldehyde-stabilized cryopreservation for humans, which I think might provide some of those benefits(?). OregonCryo is also trying to do, or already doing, something like that.
i know another researcher working on this who could probably use funding in the near future. if any of you know someone that might be interested in funding this, please lmk so I can put you in touch. i think this is one of the top opportunities for improving cryonics robustness and adoption (and maybe quality)
Clarke’s quote is apt, but the rest of the article does not hold all that well together. All you can say about cryonics is that it arrests the decay at the cost of destroying some structures in the process. Whether what is left is enough for eventual reversal, whether biological or technological, is a huge unknown whose probability you cannot reasonably estimate at this time. All we know is that the alternative (natural decomposition) is strictly worse. If someone gives you a concrete point estimate probability of revival, their estimate is automatically untrustworthy. We do not have anywhere close to the amount of data we need to make a reasonable guess.
This goes strongly against probabilistic forecasting. That seems like a wrong principle to me.
If someone says “I believe that the probability of cryonic revival is 7%”, what useful information can you extract from it, beyond “this person has certain beliefs”? Of course, if you consider them an authority on the topic, you can decide whether 7% is enough for you to sign up for cryonics. Or maybe you trust the number because you know them to be well calibrated on a variety of subjects they have expressed probabilistic views on, including topics with so many unknowns that they would need some special ineffable insight to be well calibrated on them. I am skeptical that there is any such reference class, one that includes cryonic revival and on which a person can be considered well calibrated.
It is, at the very least, interesting that people signed up for cryonics tend to give lower estimates of the probability of future revival than the general population does. This may give useful insight into the state of the field (“If you haven’t looked into it, the odds are probably worse than you think.”), into the variance in human decision making (“How much do you value increased personal longevity, really?”), and into how the field should strive to educate, market, and grow.
It could also be interesting and potentially insightful to see how those numbers have changed over time. Even if the numbers themselves are roughly meaningless, any trends in them may reflect advancement of the field, or better marketing, or changes in the population signing up or considering doing so. If I had strong reason to think that there were encouraging trends in odds of revival, as well as in cost and public acceptance, that would increase my odds of signing up. After all, under most non-catastrophic-future scenarios, and barring personal disasters likely to prevent preservation anyway, I’m much more likely to die in the 2050s-2080s than before that, and to be preserved with the technologies of those decades, which means compounding positive trends vs. static odds can make a massive difference to me. OTOH, if we’re not seeing such improvement yet but there’s reason to think we will, then waiting a few years could greatly reduce my costs (relative to early adopters) without dramatically increasing my odds of dying before signing up.
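A tiny illustrative calculation of why compounding trends matter so much here; every number below is invented purely for the example, not an estimate of actual revival odds:

```python
# Illustrative only: invented numbers, not real revival-probability estimates.
p_today = 0.05          # assumed odds of eventual revival with today's preservation
annual_gain = 0.04      # assumed 4%/year relative improvement in preservation quality
years_until_death = 40  # e.g. dying around the 2060s rather than now

p_static = p_today
p_compounding = min(1.0, p_today * (1 + annual_gain) ** years_until_death)

print(f"static odds:      {p_static:.2f}")       # 0.05
print(f"compounding odds: {p_compounding:.2f}")  # 0.24
```

Under these made-up assumptions, someone preserved with 2060s-era techniques faces very different odds than an early adopter today, which is the asymmetry described above.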
(If we’re really lucky and sane in the coming decades there’s a small chance preservation of some sort will be considered standard healthcare practice by the time I die, but I don’t put much weight on that.)
Your comment creates a misleading impression of my article. Nowhere do I say experts can give a point probability of success. On the contrary, I repeatedly reject that idea. I also find it silly when people say the probability of AI destroying humans is 20%, or 45%, or whatever.
You don’t provide any support for the claim that “the rest of the article doesn’t hold all that well together”, so I’m unable to respond usefully.