Cryonics has a more serious problem which I seldom see addressed. I’ve noticed a weird cognitive dissonance among cryonicists where they talk a good game about how much they believe in scientific progress, technological acceleration and so forth—yet they seem totally unconcerned about the fact that we just don’t see this alleged trend happening in cryonics technology, despite its numerous inadequacies. In fact, Mike Darwin argues that the quality of cryopreservations has probably regressed since the 1980s.
In other words, attempting the cryogenic preservation of the human brain in a way which makes sense to neuroscientists, which should become the real focus of the cryonics movement, has a set of solvable, or at least describable, problems which current techniques could go a long way towards solving without having to invoke speculative future technologies or friendly AIs. Yet these problems have gone unsolved for decades, and not for lack of financial resources. Just look at some wealthy cryonicists’ plans to waste $100 million or more building that ridiculous Timeship (a.k.a. the Saulsoleum) in Comfort, Texas.
What brought about this situation? I’ve made myself unpopular by suggesting that we can blame cryonics’ association with transhumanism, and especially with the now-discredited capital-N Nanotechnology cultism Eric Drexler created in the 1980s. Transhumanists and their precursors have a history of publishing nonsensical predictions about how we’ll “become immortal” by arbitrary dates within the life expectancies of the transhumanists who make these forecasts. (James D. Miller does this in his book Singularity Rising; I leave articulating the logical problem with this claim as an exercise for the reader.) Then one morning we read in our email that one of these transhumanists has died according to actuarial expectations, and possibly went into cryo, like FM-2030; or simply died in the ordinary way, like the Extropian Robert Bradbury.
In other words, transhumanism promotes a way of thinking which tends to make transhumanists spectators of, instead of active participants in, creating the sort of future they want to see. And cryonics has become a casualty of this screwed-up worldview, when it didn’t have to turn out that way. Why exert yourself to improve cryonics’ scientific credibility—again, in ways which neuroscientists would have to take seriously—when you believe that friendly AIs, Drexler’s genie-like nanomachines and the technological singularity will solve your problems in the next 20–30 years? And as a bonus, this wonderful world in 2045 or so will also revive almost all the cryonauts, no matter how badly damaged their brains.
Well, I don’t consider this a feasible “business plan” for my survival by cryotransport. And I know some other cryonicists who feel similarly. Cryonics needs some serious rebooting, and I’ve started to give some thought to how I can get involved in the effort once I find the people who look like they can make a go of it.
I would be grateful if you would tell me what the logical problem is.
Presumably, the implication is that these predictions are not based on facts, but had their bottom line written first, and then everything else added later.
[I make no endorsement in support or rejection of this being a valid conclusion, having given it very little personal thought, but this being the issue that advancedatheist was implying seems fairly obvious to me.]
Thanks, if this is true I request advancedatheist explain why he thinks I did this.
I can’t speak for advancedatheist, but others I’ve heard make similar statements generally seem to base them on a kind of factor analysis: when evaluating a self-proclaimed transhumanist’s prediction of the future development of some technology that does not currently exist, the factor that best predicts the forecast date is the current age of the predictor.
As I’ve not read much transhumanist writing, I have no real way to evaluate whether this is an accurate analysis or simply cherry-picking of the most egregious or most widely publicized examples (I frequently see Kurzweil and… mostly just Kurzweil, really, popping up when I’ve heard this argument before).
[As an aside: only after finishing this comment did I make the connection that you’re the author he cited as the example, rather than just a random commenter, so I’d assume you’re much more familiar with the topic at hand than I am.]
The problem of people compartmentalizing between what they think is valuable and what they ought to be working on is pretty universal. That being said, it does make cryonics less likely to succeed, and thus worth less; it’s just a failure mode that might be hard to solve.
I believe I’ve seen Mike Darwin and others specifically point to Eliezer as an example of a cryonics proponent who is increasing the number and ratio of spectator cryonauts, rather than active cryonauts.
As a counterpoint, let me offer my own experience rediscovering cryonics through Eliezer.
Originally, I hadn’t seen the point. Like most people, I assumed cryonauts dreamed that one day someone would simply thaw them out, cure whatever killed them, and restart their heart with shock paddles or something. Even the most rudimentary understanding of or experience with biology and freezing temperatures made this idea patently absurd.
It wasn’t until I discovered Eliezer’s writings circa 2001 or so that I was able to see connections between high shock-level concepts like uploading, nanotech, and superintelligence. I reasoned that a successful outcome of cryonics is not likely to come through direct biological revival, but rather through atomically precise scanning, super-powerful computational reconstruction, and reinstantiation as an upload or in a replacement body.
The upshot of this reasoning is that for cryonics to have any chance of success, a future must be assured in which these technologies would be safely brought to bear on such problems. I continue to have trouble imagining such a future existing if the friendly AI problem is not solved before it is too late. As friendly AI seems unlikely to be solved without careful, deliberate research (which very few people are doing), investing in cryonics without also investing in friendly AI research feels pointless.
In those early years, I could afford to make donations to SIAI (now MIRI), but could not afford a cryonics plan, and certainly could not afford both. As I saw it, I was young. I could afford to wait on the cryonics, but would have the most impact on the future by donating to SIAI immediately. So I did.
That’s the effect Eliezer’s cryonics activism had on me.
Which should be fine; an increase in spectator cryonauts is harmless as long as it isn’t stealing from the pool of active cryonauts. Since in this case it is drawing in people who otherwise wouldn’t have anything to do with cryonics, it is still a good thing.
No one is working on cryonics because there’s no money/interest because no one is signed up for cryonics. Probably the “easiest” way to solve this problem is to convince the general public that cryonics is a good idea. Then someone will care about making it better.
Some rich patron funding it all sounds good, but I can’t think of a recent example where one person funded a significant R&D advance in any field.
“but I can’t think of a recent example where one person funded a significant R&D advance in any field.”
Christopher Reeve funded research into curing spinal cord injury. Terry Pratchett funds research into Alzheimer’s. I’m sure there are others.
Pratchett’s donation appears to account for about 1.5 months of British public funding for Alzheimer’s research (numbers from http://web.archive.org/web/20080415210729/http://www.alzheimers-research.org.uk/news/article.php?type=News&archive=0&id=205; math from me). Which is great and all, but public funding is way better. So I stand by my claim.
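For what it’s worth, the “X months of public funding” comparison is just the ratio of a one-off donation to the monthly public research budget. A minimal sketch with placeholder figures (the actual amounts are in the archived article linked above, not asserted here):

```python
# Placeholder figures for illustration only -- the real donation and
# funding amounts are in the archived article, not these numbers.
donation = 1_000_000                # hypothetical one-off private donation
annual_public_funding = 8_000_000   # hypothetical annual public research budget

# How many months of public funding the donation is equivalent to.
months_equivalent = donation * 12 / annual_public_funding
print(months_equivalent)  # -> 1.5 with these placeholder figures
```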
OK, I stand corrected re: Pratchett. How did you come by the numbers? And can you research Reeve’s impact too?
Until then, you’ve still “heard of one recent example” :)
Applause. If there actually existed a cryopreservation technique that had been proven to really work in animal models—or better yet in human volunteers!—I would go ahead and sign up. But it doesn’t exist, and instead of telling me who’s working on making it exist, people tell me about the chances of successful revival using existing techniques.
I could say the same thing to the FAI effort. Actually, no, I am saying the same thing. Everyone seems to believe that too few people are committed to FAI research, but very few step up to actually volunteer their own efforts, even on a part-time basis, despite much of it still being in the realm of pure mathematics or ethics where you need little more than a good brain, some paper, pens, and lots of spare time to make a possible contribution.
Nu? If everyone has a problem and no one is doing anything about it… why?