If anyone is interested in seeing comments that are more representative of a mainstream response than what can be found from an Accelerating Future thread, Metafilter recently had a post on the NY Times article.
The comments aren’t hilarious and insane; they’re more casually dismissive. In this thread, cryonics is called an “afterlife scam”, a pseudoscience, science fiction (technically true at this stage, but there’s definitely an implied negative connotation on the “fiction” part, as if you shouldn’t invest in cryonics because it’s just nerd fantasy), and Pascal’s Wager for atheists (the comparison is fallacious, and I thought the original Pascal’s Wager was aimed at atheists anyway...). There are a few criticisms that it’s selfish, more than a few jokes sprinkled throughout the thread (as if the whole idea is silly), and even your classic death apologist.
All in all, a delightful cornucopia of irrationality.
ETA: I should probably point out that there were a few defenses. The most highly received defense of cryonics appears to be this post. There was also a comment from someone registered with Alcor that was very good, I thought. I attempted a couple of rebuttals, but I don’t think they were well-received.
Also, check out this hilarious description of Robin Hanson from a commenter there:
The husband in that article sounded like an annoying nerd. Would I want to be frozen and wake up in a world run by these annoying douchebags? His ‘futurecracy’ idea seems idiotic (and also unworkable).
I guess that the fatal problem with cryonics is all the freaking nerds interested in it.
The responses are interesting. I think this is the most helpful to my understanding:
I’m getting sort of tired arguing about the futility of current cryogenics, so I won’t.
I will state that, if my spouse fell for some sort of afterlife scam that cost tens of thousands of dollars, I WOULD be angry.
“This is not a hobby or conversation piece,” he wrote in 1968, adding, “it is the struggle for survival. Drive a used car if the cost of a new one interferes. Divorce your wife if she will not cooperate.”
Scientology urges the exact same thing. posted by muddgirl at 5:52 PM on July 11
I think this is the biggest PR hurdle for cryonics: it resembles (superficially) a transparent scam selling the hope of immortality for thousands of dollars.
um… why isn’t it?
There’s a logically possible chance of revival someday, yeah. But with no way to estimate how likely it is, you’re blowing money on mere possibility.
We don’t normally make bets that depend on the future development of currently unknown technologies. We aren’t all investing in cold fusion just because it would be really awesome if it panned out.
Sorry, I know this is a cryonics-friendly site, but somebody’s got to say it.
There are a lot of alternatives to fusion energy, and since energy production is a widely recognized societal issue, making individual bets on it is not an immediate matter of life and death on a personal level.
I agree with you, though, that a sufficiently high probability estimate on the workability of cryonics is necessary to rationally spend money on it.
However, if you give a 1% chance to both fusion and cryonics working, it could still make sense to bet on the latter but not on the former.
Don’t read too much into my fusion analogy; you’re right that cryonics is different from fusion.
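That asymmetry can be made concrete with a toy expected-value calculation. Every dollar figure below is an illustrative assumption, not a real cost or payoff estimate:

```python
# Toy expected-value comparison: same 1% probability, very different stakes.
# Every figure here is an illustrative assumption, not a real estimate.
p = 0.01
cryonics_cost = 30_000      # assumed lifetime cost of a cryonics arrangement
life_value = 10_000_000     # assumed dollar-equivalent value placed on revival
fusion_stake = 30_000       # assumed personal investment in fusion research
fusion_payoff = 1_000_000   # assumed personal return if fusion pans out

ev_cryonics = p * life_value - cryonics_cost   # positive at these numbers
ev_fusion = p * fusion_payoff - fusion_stake   # negative at these numbers
print(ev_cryonics, ev_fusion)
```

The point is only structural: with a huge personal payoff, even a small probability can carry a positive expected value, while the same probability applied to a modest payoff does not.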
May I suggest also that we be careful to distinguish cold fusion from fusion in general? Cold fusion is extremely unlikely. Hot fusion reactors, whether laser-confinement or magnetic-confinement, already exist; the only issue is getting them to produce more useful energy than you put in. This is very different from cold fusion, where the scientific consensus is that there’s nothing fusing.
… and different from almost any other unproven technology (for the exact same reason).
That’s OK, it’s a skepticism-friendly site as well.
I don’t see a mechanism whereby I get a benefit within my lifetime by investing in cold fusion, in the off chance that it is eventually invented and implemented.
Well, if you think there’s a decent probability for cryonics to turn out, then investing in pretty much anything long-term becomes much more likely to be personally beneficial. Indeed, research in general increases the probability that cryonics will end up working (since it reduces the chance of catastrophic events or social problems and the like occurring before the revival technology is reached). The problem with cold fusion is that it is extremely unlikely to work given the data we have. I’d estimate that it is orders of magnitude more likely that, say, étale cohomology turns out to have a practical application than that cold fusion will turn out to function. (I’m picking étale cohomology as an example because it is pretty but very abstract math that, as far as I am aware, has no applications and seems very unlikely to have any for the foreseeable future.)
You don’t think it likely that étale cohomology will be applied to cryptography? I’m sure there are papers already claiming to apply it, but I wouldn’t want to evaluate them. Some people describe it as part of Schoof’s algorithm, but I’m not sure that’s fair. (Or maybe you count elliptic curve cryptography as whimsy—it won’t survive quantum computers any longer than RSA.)
Yeah, OK. That may have been a bad example, or it may be an indication that everything gets some application. I don’t know how it relates to Schoof’s algorithm; it isn’t, as far as I’m aware, used in the algorithm or in the correctness proof, but this is stretching my knowledge base. I don’t have enough expertise to evaluate any claims about applying étale cohomology to cryptography.
I’m not sure what to replace that example with. Stupid cryptographers going and making my field actually useful to people.
There’s always a way to estimate how likely something is, even if it’s not a very accurate prediction.
And “mere” used like that seems kinda like a dark-side word, if you’ll excuse me.
Cryonics is theoretically possible, in that it isn’t inconsistent with science/physics as we know it so far. I can’t really delve into this part much, as I don’t know anything about cold fusion and thus can’t understand the comparison properly, but it sounds as if it might be inconsistent with physics?
Possibly relevant: Is Molecular Nanotechnology Scientific?
Also, the benefits of cryonics working if you invested in it would be greater than those of investing in cold fusion.
And this is just the impression I get, but it sounds like you’re being a contrarian contrarian. I think it’s your last sentence: it made me think of Lonely Dissent.
The unfair thing is, the more a community (like LW) values critical thinking, the more we feel free to criticize it. You get a much nicer reception criticizing a cryonicist’s reasoning than criticizing a religious person’s. It’s easy to criticize people who tell you they don’t mind. The result is that it’s those who need constructive criticism the most who get the least. I’ll admit I fall into this trap sometimes.
(belated reply:) You’re right about the openness to criticism part, but there’s another thing that goes with it: the communities that value critical thinking will respond to criticism by thinking more, and on occasion this will literally lead to the consensus reversing on the specific question. Without a strong commitment to rationality, however, frequently criticism is met by intransigence instead, even when it concerns the idea rather than the person.
Yes, people caught in anti-epistemological binds get less criticism—but they usually don’t listen to criticism, either. Dealing with these is an unsolved problem.
Well, right off the bat, there’s a difference between “cryonics is a scam” and “cryonics is a dud investment”. I think there’s sufficient evidence to establish the presence of good intentions—the more difficult question is whether there’s good evidence that resuscitation will become feasible.
But with no way to estimate how likely it is, you’re blowing money on mere possibility.
You seem to be under the assumption that there is some minimum amount of evidence needed to give a probability. This is very common, but it is not the case. It’s just as valid to say that the probability that an unknown statement X about which nothing is known is true is 0.5, as it is to say that the probability that a particular well-tested fair coin will come up heads is 0.5.
Probabilities based on lots of evidence are better than probabilities based on little evidence, of course; and in particular, probabilities based on little evidence can’t be too close to 0 or 1. But not having enough evidence doesn’t excuse you from having to estimate the probability of something before accepting or rejecting it.
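One way to see why little-evidence probabilities stay away from the extremes is Laplace’s rule of succession, the uniform-prior special case of a Beta posterior. A minimal sketch:

```python
def posterior_mean(successes, trials, prior_a=1.0, prior_b=1.0):
    """Mean of the Beta posterior after `successes` out of `trials`.

    With the uniform prior (a = b = 1) this is Laplace's rule of
    succession: (successes + 1) / (trials + 2).
    """
    return (successes + prior_a) / (trials + prior_a + prior_b)

print(posterior_mean(0, 0))      # no evidence at all: 0.5
print(posterior_mean(3, 3))      # 3-for-3, little evidence: 0.8, not ~1
print(posterior_mean(300, 300))  # 300-for-300, lots of evidence: ~0.997
```

With zero data you get exactly 0.5, and no small sample can push the estimate all the way to 0 or 1.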
I’m not disputing your point vs cryonics, but 0.5 will only rarely be the best possible estimate for the probability of X.
It’s not possible to think about a statement about which literally nothing is known (in the sense of information potentially available to you). At the very least you either know how you became aware of X or that X suddenly came to your mind without any apparent reason. If you can understand X you will know how complex X is. If you don’t you will at least know that and can guess at the complexity based on the information density you expect for such a statement and its length.
Example: If you hear someone whom you don’t specifically suspect of having a reason to make it up say that Joachim Korchinsky will marry Abigail Medeiros on August 24, that statement should probably be assigned a probability quite a bit higher than 0.5, even if you don’t know anything about the people involved. If you generate the same statement yourself by picking names and a date at random, you should probably assign it a probability very close to 0.
Basically it comes down to this: Most possible positive statements that carry more than one bit of information are false, but most methods of encountering statements are biased towards true statements.
I wonder what the average probability of truth is for every spoken statement made by the human populace on your average day, for various message lengths. Anybody wanna try some Fermi calculations?
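For the earlier marriage-announcement example, here is one crude Fermi sketch of how unlikely a randomly generated statement of that form is; every number is a rough order-of-magnitude assumption:

```python
# Fermi sketch: P("X will marry Y on date D") for X, Y, D picked at random.
# All inputs are order-of-magnitude guesses, not data.
adults = 5e9               # rough number of adults worldwide
marriages_per_year = 6e7   # rough worldwide marriages per year
days_per_year = 365

# Chance that one specific ordered pair marries on one specific day:
p_random = marriages_per_year / (adults * (adults - 1) * days_per_year)
print(p_random)  # astronomically small
```

Which is why the randomly generated version of the statement deserves a probability very close to 0, while the overheard version inherits the much better base rate of things people actually assert.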
I’m guessing it’s rather high, as most statements are trivial observations about sensory data, performative utterances, or first-glance approximations of one’s preferences. I would also predict sentence accuracy drops off extremely quickly the more words the sentence has, and especially so the more syllables there are per word in that sentence.
Once you are beyond the most elementary of statements I really don’t think so; rather the opposite, at least for unique rather than repeated statements. Most untrue statements are probably either ad hoc lies (“You look great.” “That’s a great gift.” “I don’t have any money with me.”) or misremembered information.
In the case of ad hoc lies there is not enough time to invent plausible details, and inventing details without time to think them through increases the risk of being caught; in the case of misremembered information, you are less likely to know or remember additional information you could include in the statement than someone who really knows the subject and wouldn’t make that error. Of course, more information simply means including more things even the best experts on the subject are wrong about, as well as more room for misrememberings, but I think the first effect dominates, because there are many subjects the second effect doesn’t really apply to, e.g. the content of a work of fiction or the constitution of a state (to an extent, even legal matters in general).
Complex untrue statements would be things like rehearsed lies and anecdotes/myths/urban legends.
Consider the so-called conjunction fallacy: if it were maladaptive for evaluating the truth of statements encountered normally, it probably wouldn’t exist. So in everyday conversation (or at least in the sorts of situations that are relevant for the propagation of the memes and/or genes involved), complex statements, at least of the kinds that can be observed to be evaluated “fallaciously”, are probably more likely to be true.
But with no way to estimate how likely it is, you’re blowing money on mere possibility.
There isn’t no way to estimate it. We can make reasonable estimates of probability based on the data we have (what we know about nanotech, what we know about brain function, what we know about chemical activity at very low temperatures, etc.).
Moreover, it is always possible to estimate something’s likelihood, and one cannot simply say “oh, this is difficult to estimate accurately, so I’ll assign it a low probability.” For any statement A that is difficult to estimate, I could just as easily make the same argument for ~A, and obviously A and ~A can’t both have low probabilities.
That’s true; uncertainty about A doesn’t make A less likely. It does, however, make me less likely to spend money on A, because I’m risk-averse.
Have you decided on a specific sum that you would spend based on your subjective impression of the chances of cryonics working?
Maybe $50. That’s around the most I’d be willing to accept losing completely.
Nice. I believe that would buy you indefinite cooling as a neuro patient, if about a billion other individuals (perhaps as few as 100 million) are also willing to spend the same amount.
Would you pay that much for a straight-freeze, or would that need to be an ideal perfusion with maximum currently-available chances of success?
I wonder how much money it would cost to commission the required science and marketing to get 10^5 cryopreserved people?
I welcome your guesses.
My guess, ROT13’d:
V guvax gung vg jbhyq pbfg nebhaq bar uhaqerq zvyyvba qbyynef bire n crevbq bs guvegl lrnef
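For anyone who doesn’t want to rotate the letters by hand, ROT13 can be decoded with Python’s standard codecs module (this just decodes the text above; nothing about the guess is changed):

```python
import codecs

guess = ("V guvax gung vg jbhyq pbfg nebhaq bar uhaqerq zvyyvba "
         "qbyynef bire n crevbq bs guvegl lrnef")
print(codecs.decode(guess, "rot13"))
```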