I agree that this argument depends a lot on how you define “evidence”. But it’s not just by courtroom standards of evidence that the cryonics argument fails. And it’s unfair to compare evolutionary evidence with cryonics evidence: the former has been explicitly and clearly documented and tested (as in dog breeding), whereas the latter (as I keep repeating) has had absolutely no testing whatsoever.
In the general scientific community, evidence is required to be explicitly linked to the claim it supports. A test like “bringing a dog back from a low-blood-pressure state for 15 minutes while under intensive pharmacological life support” does not establish that human cryonic revival will work. It justifies a certain optimism, but (to my mind) that optimism appears to have taken over the discussion we’re having here.
In the physics community, and in the mainstream scientific community in general, the cryonics argument doesn’t pass, for these reasons. (You might argue that the mainstream scientific community is simply ignorant of the cryonics idea, but that only points to the same conclusion.)
The reason for the difference in “evidence” is that test-evidence is crucial for scientific justification. There’s a deep, wide difference between strong plausibility and demonstrated truth. That’s why, for example, the whole field of GAI and singularity science receives so little attention. It’s all extremely plausible, but plausibility alone doesn’t count for very much, and that’s why this is a forum, not an academic conference. To me, it definitely doesn’t count for paying hundreds or thousands of dollars—and that’s assuming that the price doesn’t jump.
For the cryonics argument, the leaps of assumption required to go from plausibility to fact include, for example, nanotechnological repair techniques, which don’t exist in any form today. You can argue about plausibility all you want (as AngryParsley has done), but that’s only effective insofar as it demonstrates further plausibility. It doesn’t actually take you anywhere, and it doesn’t provide evidence of the sort you need to do real science (i.e., publish papers).
And I appreciate your concern about my downvotes, but it’s OK; I think I’m happily doomed to a constant zero-karma status.
Then you are, in a technical sense, crazy. The rules of probability theory are rules, not suggestions. No one gets to modify what counts as “rational evidence”, not even science; it’s given by Bayes’s Theorem. If you make judgments based on other criteria, such as “never believe anything strange can happen, regardless of what existing theories predict when extrapolated, until I see definite confirmation, even when definite confirmation cannot reasonably be expected yet given that the underlying theory is correct”, you’ll end up with systematically wrong answers. The same goes if you try to decide in defiance of expected utility, using a rule like “ignore the stakes if they get large enough” or “ignore any outcome that has not received definite advance confirmation, regardless of its probability under a straight extrapolation of modern science”.
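To make “given by Bayes’s Theorem” concrete, here is a minimal sketch. The numbers are invented purely for illustration; they are not estimates of anything cryonics-related.

    # Bayes's Theorem: P(H|E) = P(E|H) * P(H) / P(E)
    p_h = 0.01             # prior probability of hypothesis H (made up)
    p_e_given_h = 0.8      # probability of evidence E if H is true (made up)
    p_e_given_not_h = 0.1  # probability of E if H is false (made up)

    # Total probability of seeing E at all:
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    posterior = p_e_given_h * p_h / p_e
    print(posterior)  # ~0.075: the evidence raised P(H) from 1% to about 7.5%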
I very easily believe that you think I’m crazy. Then again, you’re the one paying thousands of dollars to freeze yourself, devoting your life to philosophy in the quest of achieving a technical, non-philosophical goal, and believing that the universe is creating infinitely many new worlds at every instant.
Your idea of “data extrapolation” is drastically different from mine. I’m using the accepted modus operandi of data extrapolation in physics, biology, and all other hard sciences, where we extrapolate conclusions from direct evidence. While my form of extrapolation is in no way “superior”, it does have a much higher degree of certainty, and I certainly don’t feel that that’s somehow “crazy”.
If I’m crazy, then the entire scientific community is crazy. All I’m doing is requiring the standard level of data before I believe someone’s claim to be correct. And this standard is very, very fruitful, I might add.
Your repeated references to your own background in physics as a way of explaining your thinking suggest to me that you may not be hearing what other people are saying, but rather mistranslating it.
I don’t see anybody saying they think unfreezing will definitely work. By and large, I see people saying that if you greatly value extending your life into the far future, then, given all the courses of action that we know of, signing up for cryonics is one of the best bets.
Evidence is knowledge that supports a conclusion. It isn’t anything like proof. In that sense, there is evidence that unfreezing will work someday, which is not to say that anybody knows that it will work.
Ironically, you’re mistaking my assertion in the same manner that you think I’m mistaking others’. It’s correct that people aren’t claiming certainty in their beliefs, but neither am I. In fact, I’m merely insisting on the uncertainty in the probabilities, and that’s all.
I haven’t once claimed that cryonics won’t work, or can’t work. My skepticism was engendered by the extreme degree of support shown in the top post. That said, I haven’t even expressed that much skepticism. I merely suggested that I would wait a while before coming to a conclusion as strong as EY’s, e.g., strong enough to pass value judgements on others for not making the same decision.
You do, however, seem to be making the claim that given the current uncertainties, one should not sign up for cryonics; and this is the point on which we seem to really disagree.
One’s actions are a statement of probability, or they should be if one is thinking and acting somewhat rationally under a state of uncertainty.
It’s not known with certainty whether the current cryonics procedure will suffice to actually extend my life. But that doesn’t excuse me from making a decision about it now: I might well die before the feasibility of cryonics is definitively settled, and I legitimately care about things like “dying needlessly” on the one hand and “wasting money pointlessly” on the other.
You might legitimately come to the conclusion that you don’t like the odds when compared with the present cost and with respect to your priorities in life. But saying “I’m not sure of the chances, therefore I’ll go with the less weird-sounding choice, or the one that doesn’t require any present effort on my part” is an unsound heuristic in general.
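To make the decision structure explicit, here is a toy expected-value comparison. Every number below is a hypothetical placeholder, not an estimate I am defending.

    # Sign up iff the expected value beats doing nothing.
    p_works = 0.05           # your probability that cryonics works (made up)
    value_of_revival = 1e7   # how much you value the extended life, in $ (made up)
    cost = 3e5               # lifetime cost of signing up, in $ (made up)

    ev_sign_up = p_works * value_of_revival - cost
    print(ev_sign_up)  # 200000.0 > 0, so sign up under THESE numbers;
                       # with p_works = 0.01 the same arithmetic says decline.

The point is not the particular output; it is that the uncertainty goes into p_works rather than excusing you from the calculation.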
But saying “I’m not sure of the chances, therefore I’ll go with the less weird-sounding choice, or the one that doesn’t require any present effort on my part” is an unsound heuristic in general.
A good point.
But saying “I’m not sure of the chances, therefore I’ll go with the less weird-sounding choice, or the one that doesn’t require any present effort on my part” is an unsound heuristic in general.
I might be missing a bit of jargon here, but what would it mean for a heuristic to be “unsound”? Do you mean it performs badly against other competing heuristics?
It sounds to me like a heuristic that should work pretty well, and I can’t think of a heuristic as simple that works better in general cases. It seems to be of the same caliber as “gambling money is bad”.
Uncertainties in the probabilities can’t be absorbed into a single, more conservative probability? If I’m 10% sure that someone’s estimate that cryonics has a 50% likelihood of working is well calibrated, isn’t that the same as being 5% sure that cryonics is likely to work?
You’re assigning 0% probability to (cryonics_working|estimate_miscalibrated). Therefore you should buy the lottery ticket.
My calculation was simplistic, you’re right, but still useful for arriving at a conservative estimate. To add to the nitpickiness, we should mention that the probability that someone is completely well calibrated on something like cryonics is almost surely 0. The 10% estimate should instead be for the chance that the person’s estimate is well calibrated or underconfident.
ETA: applying the same conservative method to the lottery odds would lead you to be even less willing to buy a ticket.
The point I’m making is that there is an additional parameter in the equation: your probability that cryonics is possible independent of that source. This needn’t be epsilon any more than the expectation of the lottery value need be epsilon.
I agree. That’s why I said so previously =P
Potentially more—perhaps their process for calibration is poor, but the answer coincidentally happens to be right.
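Spelling out the sub-thread’s arithmetic in one place (the numbers are purely illustrative): the naive “10% × 50% = 5%” silently sets that additional parameter to zero.

    # Law of total probability over whether the source is well calibrated.
    p_calibrated = 0.10
    p_works_if_calibrated = 0.50
    p_works_if_not = 0.02  # your own prior, independent of the source (made up)

    p_works = (p_calibrated * p_works_if_calibrated
               + (1 - p_calibrated) * p_works_if_not)
    print(p_works)  # 0.068 -- above the naive 5%, and nowhere near epsilon
                    # unless p_works_if_not is itself epsilon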
Suppose someone offered you 1,000,000 : 1 odds against string theory being correct. You can buy in with $5. If string theory is ever confirmed (or at least shown to be more accurate than the Standard Model) then you make $5,000,000, otherwise you’ve wasted your $5. Do you take the option?
There is—as far as I know—no experimental evidence in favor of string theory, but it certainly seems plausible to many physicists. In fact, based on that plausibility, many physicists have done the expected value calculation and decided they should take the gamble and devote their lives to studying string theory—a decision very similar to the hypothetical monetary option.
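For concreteness, the break-even arithmetic on that hypothetical bet (the probability below is made up):

    # Expected value of the hypothetical string-theory wager.
    cost = 5.0    # buy-in, in $
    payout = 5e6  # paid if string theory is ever confirmed
    p = 2e-6      # your probability of confirmation (made up)

    ev = p * payout - (1 - p) * cost
    print(ev)  # ~5.0: positive whenever p exceeds roughly 1e-6, i.e. the
               # bet is good unless you think confirmation is less likely
               # than one in a million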
devoting your life to philosophy in the quest of achieving a technical, non-philosophical goal
Less Wrong reads like philosophy, but that’s because it’s made up of words. As I understand it, Eliezer’s real quest is writing down an ideal decision theory and set of goals, and rigorously proving the safety of self-modifying code for optimizing that decision theory. All of this philosophy is necessary in order to settle what it even means to have an ideal decision theory or set of goals.
There’s a deep, wide difference between strong plausibility and demonstrated truth.
This right here is the root of the disagreement. There is no qualitative difference between a “strong plausibility” and a “demonstrated truth”. Beliefs can only have degrees of plausibility (aka probability) and these degrees never attain certainty. A “demonstrated truth” is just a belief whose negation is extremely implausible.
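A quick illustration of “never attain certainty”: even many independent confirmations only push a probability toward 1 asymptotically (toy numbers again).

    # Ten independent pieces of evidence, each with a 10:1
    # likelihood ratio in favor of the hypothesis.
    odds = 1.0  # prior odds of 1:1
    for _ in range(10):
        odds *= 10.0

    p = odds / (1 + odds)
    print(p)  # 0.9999999999: the negation is extremely implausible,
              # but p is still strictly less than 1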
The problem is that what you call “strong plausibility” is entirely subjective until you bring evidence onto the table, and that’s the point I’m trying to make.
Edit: In other words, I might think cryonics is strongly plausible, but I might also think, for example, that the existence of gnomes is pretty plausible; I don’t believe either one until I see some evidence.
The problem is that what you call “strong plausibility” is entirely subjective until you bring evidence onto the table, and that’s the point I’m trying to make.
A piece of evidence regarding a claim is any information, of whatever sort, that ought to affect the plausibility (aka probability) that you assign to the claim. If that evidence makes the claim “strongly plausible”, then that’s an objective fact about how the laws of probability require you to update on the evidence. There is nothing subjective about it.
In other words, I might think cryonics is strongly plausible, but I might also think, for example, that the existence of gnomes is pretty plausible; I don’t believe either one until I see some evidence.
If you thought that garden gnomes were strongly plausible, then either
you had the misfortune to start with a very inaccurate prior,
you had the misfortune to be exposed to highly misleading evidence, or, most likely,
you failed to apply the rules of probable inference correctly to the evidence you received.
None of these need be the case for you to think that cryonics is plausible enough to justify the expense. That is the difference between cryonics and garden gnomes.
And I appreciate your concern about my downvotes, but it’s OK; I think I’m happily doomed to a constant zero-karma status.
So that’s why you chose your moniker...
Seriously, though, I doubt it. You’re a contrarian here on cryonics and AI, but ISTM most of the downvotes have been due to misinterpretation or miscommunication (in both directions, but it’s easier for you to learn our conversational idiosyncrasies than vice versa) rather than mere disagreement. As soon as you get involved in some discussions on other topics and grow more accustomed to the way we think and write, your karma will probably drift up whether you like it or not.
I agree that this argument depends a lot on how you define “evidence”. But it’s not just by courtroom standards of evidence that the cryonics argument fails.
Yes, that’s very true. You persuasively argue that there is little scientific evidence that current cryonics will make revival possible.
But you are still conflating Bayesian evidence with scientific evidence. I wonder if you could provide a critique arguing that we shouldn’t use Bayesian evidence to make decisions (or at least decisions about cryonics), but rather scientific evidence. The consensus around here is that Bayesian evidence is much more effective on an individual level, even though, with current humans, science is still very much necessary for overall progress in knowledge.