I very easily believe that you think I’m crazy. Then again, you’re the one paying thousands of dollars to freeze yourself, devoting your life to philosophy in the quest of achieving a technical, non-philosophical goal, and believing that the universe is creating infinitely many new worlds at every instant.
Your idea of “data extrapolation” is drastically different from mine. I’m using the accepted modus operandi of data extrapolation in physics, biology, and all other hard sciences, where we extrapolate conclusions from direct evidence. While my form of extrapolation is in no way “superior”, it does have a much higher degree of certainty, and I certainly don’t feel that that’s somehow “crazy”.
If I’m crazy, then the entire scientific community is crazy. All I’m doing is requiring the standard level of data before I believe someone’s claim to be correct. And this standard is very, very fruitful, I might add.
Your repeated references to your own background in physics as a way of explaining your own thinking suggest to me that you may not be hearing what other people are saying, but rather mistranslating it.
I don’t see anybody saying they think unfreezing will definitely work. By and large, I see people saying that if you greatly value extending your life into the far future, then, given all the courses of action that we know of, signing up for cryonics is one of the best bets.
Evidence is knowledge that supports a conclusion. It isn’t anything like proof. In that sense, there is evidence that unfreezing will work someday, which is not to say that anybody knows that it will work.
Ironically, you’re mistaking my assertion in the same manner that you think I’m mistaking others’. It’s correct that people aren’t claiming certainty of beliefs, but neither am I. In fact, I’m supporting uncertainty in the probabilities, and that’s all.
I haven’t once claimed that cryonics won’t work, or can’t work. My degree of skepticism was engendered by the extreme degree of support that was shown in the top post. That being said, I haven’t even expressed that much skepticism. I merely suggested that I would wait a while before coming to a conclusion as strong as EY’s, i.e., one strong enough to pass value judgements on others for not making the same decision.
You do, however, seem to be making the claim that given the current uncertainties, one should not sign up for cryonics; and this is the point on which we seem to really disagree.
One’s actions are a statement of probability, or they should be if one is thinking and acting somewhat rationally under a state of uncertainty.
It’s not known with certainty whether the current cryonics procedure will suffice to actually extend my life, but that doesn’t excuse me from making a decision about it now since I might well die before the feasibility of cryonics is definitively settled, and since I legitimately care about things like “dying needlessly” on the one hand and “wasting money pointlessly” on the other.
You might legitimately come to the conclusion that you don’t like the odds when compared with the present cost and with respect to your priorities in life. But saying “I’m not sure of the chances, therefore I’ll go with the less weird-sounding choice, or the one that doesn’t require any present effort on my part” is an unsound heuristic in general.
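To put that trade-off in concrete terms, here is a minimal sketch of the expected-value comparison being described. Every number in it is a purely illustrative assumption, not anyone’s actual estimate.

    # Minimal sketch of the decision under uncertainty described above.
    # All figures are illustrative assumptions, not actual estimates.
    p_revival        = 0.05        # assumed probability the whole cryonics chain works
    cost_of_signup   = 50_000      # assumed total cost of membership and insurance, in dollars
    value_of_revival = 5_000_000   # assumed dollar-equivalent value placed on a far-future revival

    expected_benefit = p_revival * value_of_revival
    if expected_benefit > cost_of_signup:
        print("Under these assumptions, signing up has positive expected value.")
    else:
        print("Under these assumptions, the cost outweighs the expected benefit.")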
A good point.
I might be missing a bit of jargon here, but what would it mean for a heuristic to be “unsound”? Do you mean it performs badly against other competing heuristics?
It sounds to me like a heuristic that should work pretty well, and I’m not aware of any heuristic that is as simple but works better in general cases. It seems to be of the same caliber as “gambling money is bad”.
Uncertainties in the probabilities can’t be absorbed into a single, more conservative probability? If I’m 10% sure that someone’s estimate that cryonics has a 50% likelihood of working is well calibrated, isn’t that the same as being 5% sure that cryonics is likely to work?
You’re assigning 0% probability to P(cryonics works | estimate miscalibrated). Therefore you should buy the lottery ticket.
My calculation was simplistic, you’re right, but still useful for arriving at a conservative estimate. To add to the nitpickiness, we should mention that the probability that someone is completely well calibrated on something like cryonics is almost surely 0. The 10% estimate should instead be for the chance that the person’s estimate is well calibrated or underconfident.
ETA: applying the same conservative method to the lottery odds would lead you to be even less willing to buy a ticket.
The point I’m making is that there is an additional parameter in the equation: your probability that cryonics is possible independent of that source. This needn’t be epsilon any more than the expectation of the lottery value need be epsilon.
I agree. That’s why I said so previously =P
Potentially more—perhaps their process for calibration is poor, but the answer coincidentally happens to be right.
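A worked version of the calculation being debated above, with made-up numbers: collapsing everything into a single 5% figure implicitly sets the probability of cryonics working given a miscalibrated source to zero, while keeping the “additional parameter” gives a mixture instead.

    # Sketch of the probability mixture discussed above; all numbers are illustrative.
    p_source_calibrated   = 0.10   # credence that the source's estimate is well calibrated (or underconfident)
    p_works_if_calibrated = 0.50   # the source's own estimate
    p_works_otherwise     = 0.02   # your independent estimate -- the "additional parameter"

    # Collapsed version: implicitly treats p_works_otherwise as 0.
    collapsed = p_source_calibrated * p_works_if_calibrated            # 0.05

    # Mixture version: keeps the extra term.
    mixture = (p_source_calibrated * p_works_if_calibrated
               + (1 - p_source_calibrated) * p_works_otherwise)        # 0.068

    print(collapsed, mixture)

The mixture only falls back to the collapsed 0.05 if that independent term really is set to zero, which is exactly the point at issue above.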
Suppose someone offered you 1,000,000 : 1 odds against string theory being correct. You can buy in with $5. If string theory is ever confirmed (or at least shown to be more accurate than the Standard Model), then you make $5,000,000; otherwise you’ve wasted your $5. Do you take the option?
There is—as far as I know—no experimental evidence in favor of string theory, but it certainly seems plausible to many physicists. In fact, based on that plausibility, many physicists have done the expected value calculation and decided they should take the gamble and devote their lives to studying string theory—a decision very similar to the hypothetical monetary option.
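For concreteness, the expected-value calculation for that hypothetical bet might look like the sketch below; the credence assigned to string theory ever being vindicated is an arbitrary illustrative assumption.

    # Expected value of the hypothetical string theory bet described above.
    # The credence figure is an illustrative assumption.
    stake       = 5            # dollars paid to buy in
    payout      = 5_000_000    # dollars received if string theory is ever confirmed
    p_confirmed = 0.001        # assumed personal credence that it will be confirmed

    expected_value = p_confirmed * payout - stake
    print(expected_value)      # 4995.0 -- positive, so the bet looks worth taking at this credence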
devoting your life to philosophy in the quest of achieving a technical, non-philosophical goal

Less Wrong reads like philosophy, but that’s because it’s made up of words. As I understand it, Eliezer’s real quest is writing an ideal decision theory and set of goals, and rigorously proving the safety of self-modifying code that optimizes according to that decision theory. All of this philosophy is necessary in order to settle on what it even means to have an ideal decision theory or set of goals.