I can’t think of one single case in my experience when the argument “It has a small probability of success, but we should pursue it, because the probability if we don’t try is zero” turned out to be a good idea.
Er … isn’t that the argument for cryonics?
From four posts down:
That is, the chances of cryonics working are something like six or seven orders of magnitude better than winning the lottery.
While that shows the lottery is stupid, it doesn’t show that cryonics has made it into smart territory. Things are further complicated by the fact that your odds of winning the lottery are known, certain, and printed on the ticket; your odds of winning the cryonics lottery are fundamentally uncertain.
I disagree with ‘fundamentally’. It is no more uncertain than any future event; calling all future events fundamentally uncertain could be true under certain acceptable definitions of ‘fundamental’, but the word is hardly useful in those cases.
Medical research and testing have been done with cryonics; we have a good idea of exactly what kinds of damage occur during vitrification, and a middling idea of what would be required to fix it. IIRC, cryonics institutions remaining in operation, the law not becoming hostile to cryonics, and possible civilization-damaging events (large-scale warfare, natural disasters, etc.) are all bigger concerns than the medicine involved. All of these concerns can be quantified.
I am talking about the odds, and even if I were talking about the event, I feel pretty strongly that we can be more certain about things like the sun rising tomorrow than about me winning the lottery next week. My odds of winning the lottery with each ticket I buy are 1 in X, plus or minus some factor for fraud or finding a winning ticket. That’s a pretty narrow range of odds. My odds of being revived after cryonics have a much wider range, since the events leading up to it are far more complicated than removing 6 balls from an urn. Hence, fundamental uncertainty: the fundamental aspects of the problem are different in a way that leads to less certainty.
Yes, they have a much wider range, but all mathematical treatments of that range that I’ve seen come out showing the lower limit to be at least a few orders of magnitude greater than the lottery. Even though we are uncertain about how likely cryonics is to work, we are certain it’s more likely than winning the lottery.
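To make the orders-of-magnitude comparison concrete, here is a minimal sketch in Python; the specific probabilities are illustrative assumptions, not figures from this thread:

```python
import math

# Illustrative assumptions, not figures from the thread:
p_lottery = 1 / 175_000_000   # roughly a large-jackpot lottery ticket
p_cryonics_low = 1 / 1_000    # a pessimistic lower bound on cryonics working
p_cryonics_high = 1 / 10      # an optimistic estimate

# How many orders of magnitude separate even the pessimistic bound
# from the lottery's known, fixed probability?
gap = math.log10(p_cryonics_low / p_lottery)
print(f"lower-bound gap: ~{gap:.1f} orders of magnitude")  # ~5.2

# The cryonics estimate spans a wide (100x) range, yet the whole
# interval sits far above the lottery probability.
assert p_cryonics_high > p_cryonics_low > p_lottery
```

The point of the sketch is that a wide uncertainty interval can still be decision-relevant when its lower edge dominates the alternative.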
Unless you discover a way of gaming the lottery system.
Though that’s actually illegal, so you’d have to include the chance of getting caught.
No. If everything else we believe about the universe stays true, and humanity survives the next century, cryonics should work by default. Are there a number of things that could go wrong? Yes. Is the disjunction of all those possibilities a large probability? Quite. But by default, it should simply work. Despite various what-ifs, ceteris paribus, adding carbon dioxide to the atmosphere would be expected to produce global warming and you would need specific evidence to contradict that. In the same way, ceteris paribus, vitrification at liquid nitrogen temperatures ought to preserve your brain and preserving your brain ought to preserve your you, and despite various what-ifs, you would need specific evidence to contradict it, because it is implied by the generalizations we already believe about the universe.
Everything you say after the ‘No.’ is true but doesn’t support your contradiction of:

I can’t think of one single case in my experience when the argument “It has a small probability of success, but we should pursue it, because the probability if we don’t try is zero” turned out to be a good idea.
There is no need to defend cryonics here. Just relax the generalisation. I’m surprised you ‘can’t think of a single case in your experience’ anyway. It took me 10 seconds to think of three in mine. Hardly surprising—such cases turn up whenever the payoffs multiply out right.
I think the kind of small probabilities Eliezer was talking about here (not that he was specific) is small in the sense that there is a small probability that evolution is wrong, there is a small probability that God exists, etc.
The other interpretation is something like there is a small probability you will hit your open-ended straight draw (31%). If there are at least two players other than you calling, though, it is always a good idea to call (excepting tournament and all-in considerations). So it depends on what interpretation you have of the word ‘small’.
By the first definition of small (vanishing), I can’t think of a single argument that was a good idea. By the second, I can think of thousands. So, the generalisation is leaky because of that word ‘small’. Instead of relaxing it, just tighten up the ‘small’ part.
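The pot-odds arithmetic behind the straight-draw example above can be sketched as follows; the pot and bet sizes are illustrative assumptions:

```python
# Open-ended straight draw: 8 outs among 47 unseen cards on the flop,
# two cards to come. P(hit) = 1 - P(miss turn) * P(miss river).
p_hit = 1 - (39 / 47) * (38 / 46)   # ~0.31, the figure in the comment

def break_even_equity(pot: float, bet: float) -> float:
    """Minimum win probability for a call to break even: bet / (pot + bet)."""
    return bet / (pot + bet)

# Heads-up with one bet in the pot, you need 50% equity -- fold the draw.
# With two other callers, three bets are in before yours -- call.
print(round(p_hit, 2))                   # 0.31
print(break_even_equity(pot=1, bet=1))   # 0.5  > p_hit: calling loses
print(break_even_equity(pot=3, bet=1))   # 0.25 < p_hit: calling profits
```

The same ‘small’ probability flips from a bad bet to a good one purely because the payoff side of the ledger changed, which is the distinction being drawn here.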
Redefinition not supported by the context.
I already noted that Eliezer was not specific enough to support that redefinition. I was offering an alternate course of action for Eliezer to take.
That would certainly be a more reasonable position. (Except, obviously, where the payoffs were commensurately large. That obviously doesn’t happen often. Situations like “3 weeks to live, can’t afford cryonics” are the only kind of exception that springs to mind.)
Name one? We might be thinking of different generalizations here.
Almost certainly. I am specifically referring to the generalisation quoted by David. It is, in fact, exactly the reasoning I used when I donated to the SIAI. Specifically, I estimate the probability of me or even humanity surviving for the long term if we don’t pull off FAI to be vanishingly small (like that of winning the lottery by mistake, without buying a ticket), so I donated to support FAI research even though I think it to be, well, “impossible”.
More straightforward examples crop up all the time when playing games. Just last week I bid open misere when I had a 10% chance of winning—the alternatives of either passing or making a 9 call were guaranteed losses of the 500 game.
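The bid choice above is just expected-value maximization. A minimal sketch, assuming a common 500 scoring where open misere is worth 500 points and losing the game costs the same (the point values are assumptions, not from the comment):

```python
# Expected value of each option in points. The point values are
# illustrative assumptions based on common 500 scoring.
def ev(p_win: float, win: float, loss: float) -> float:
    return p_win * win + (1 - p_win) * loss

options = {
    "pass": ev(0.0, 0, -500),            # guaranteed loss of the game
    "9 call": ev(0.0, 0, -500),          # also a guaranteed loss
    "open misere": ev(0.10, 500, -500),  # the 10% long shot
}

best = max(options, key=options.get)
print(best, options[best])  # open misere -400.0
```

A 10% chance at the best outcome dominates here because every alternative is a certain loss; the small-probability argument works exactly when the comparison class is that bad.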
Hmm, the “if humanity survives the next century” covers the uFAI possibility (where I suspect the bulk of the probability is). I’m taking it as a given that successful cryonics is possible in principle (no vitalism, etc.). Still, even conditional on no uFAI, there is a substantial probability that cryonics, as a practical matter of actually reviving patients, will fail:
Technology may simply not be applied in that direction. The amount of specific research needed to actually revive patients may exceed the funding available.
Technology as a whole may stop progressing. We’ve had a lot of success in the last few decades in computing, less in energy, little in transportation, and what looks much like saturation in pharmaceuticals; and the lithography advances which have been driving computing look like they have maybe another factor of two to go (unless we get atomically precise nanotechnology, which mostly hasn’t been funded).
Perhaps there is a version of “coming to terms with one’s mortality” which isn’t deathist, and isn’t theological, and isn’t some vague displacement of one’s hopes onto later generations, but is simply saying that hoping to increase one’s lifespan by additional effort isn’t plausibly supported by the evidence, given the tradeoff of what one could instead do with that effort.
’scuse the self-follow-up...
One other thing that makes me skeptical about “cryonics should work by default”:
A large chunk of what makes powerful parts of our society value (at least some) human life is their current inability to manufacture plug-compatible replacements for humans. Neither governments nor corporations can currently build taxpayers or employees. If these structures gained the ability to build human equivalents for the functions that they value, I’d expect policies like requiring emergency rooms to admit people regardless of ability to pay to be dropped.
Successful revival of cryonics patients requires the ability to either repair or upload a frozen, rather damaged, brain. Either of these capabilities strongly suggests the ability to construct a healthy but blank brain or uploaded equivalent from scratch—but this is most of what is needed to create a plug-compatible replacement for a person (albeit requiring training—one time anyway, and then copying can be used...).
To put it another way: corporations and governments have capabilities beyond what individuals have, and they aren’t known for using them humanely. They already are uFAIs, in a sense. Fortunately, for now, they are built of humans as component parts, so they currently can’t dispense with us. If technology progresses to the point of being able to manufacture human equivalents, these structures will be free to evolve into full-blown uFAIs, presumably with lethal consequences.
If “by default” includes keeping something like our current social structure, with structures like corporations and governments present, I’d expect that for cryonics patients to be revived, our society would have to hit a very narrow window of technological capability. It would have to be capable of repairing or uploading frozen brains, but not capable of building plug-in human equivalents. This looks inherently improbable, rather than what I’d consider a default scenario.
And scientific research!
If you define success as “increased knowledge” instead of “new useful applications,” then the probability of success for doing scientific research is high (i.e. >75%).
For individual experiments, it is often low, depending on the field.
You increase your knowledge every time you do an experiment. Just as you do every time you ask a question in Guess Who? At the very worst you discover that you asked a stupid question or that your opponent gives unreliable answers.
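That point can be made precise as expected information gain: a yes/no question over equally likely remaining candidates yields positive information unless its answer is already certain. A minimal sketch (the 24-face count is the standard Guess Who? board):

```python
import math

def question_info_bits(n_candidates: int, n_yes: int) -> float:
    """Expected bits gained from a yes/no question that is true
    for n_yes of n_candidates equally likely candidates."""
    p = n_yes / n_candidates
    if p in (0.0, 1.0):
        return 0.0  # an answer you already know teaches nothing
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

print(question_info_bits(24, 12))           # 1.0 -- an even split is best
print(round(question_info_bits(24, 1), 2))  # 0.25 -- lopsided, still > 0
```

Any question that could come out either way strictly narrows the candidate set in expectation, which is the sense in which even a “stupid” question increases knowledge.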
The relevant probability is p(benefits > costs), not p(benefits > 0).
Reading through the context confirms that the relevant probability is p(increased knowledge). I have no specified position on whether the knowledge gained is sufficient to justify the expenditure of effort.
Indeed. I forgot. Oops.
Often it clearly isn’t; so don’t do that sort of research.
Don’t spend $200 million trying to determine if there are a prime number of green rocks in Texas.