If you think cryonics has a very high likelihood of working, then sure. I don’t think the arguments that cryonics is likely to work are that good, though. I don’t think Eliezer has even made arguments to that effect. They were mostly “hey, doing cryonics is better than not doing it, because not doing cryonics is just death!”
I think Eliezer is pretty confident that cryonics will work. For myself, I’m not sure; I haven’t really looked into it that deeply, but the a priori argument makes sense, and I feel like this is the kind of thing people would be irrationally biased against due to its speculative nature (similar to the AGI skepticism that many had until recently), so I’d give it decent odds.
Also, I don’t see why you think cryonics doesn’t make sense as an alternative option.
I was responding to this point. The “cryonics is better than nothing” argument doesn’t make cryonics an alternative option to immortality by friendly AI. If Bob thinks cryonics has a 10% chance of making him immortal and thinks AI will have a 20% chance of making him immortal and an 80% chance of destroying the world, then the superhuman AI route is more likely to lead to Bob’s immortality than cryonics.
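Bob’s comparison can be spelled out in a few lines. This is just a sketch of the arithmetic in the example above; the probabilities are the hypothetical ones from the example, not anyone’s actual estimates:

```python
# Hypothetical probabilities from Bob's example above.
p_cryonics_immortal = 0.10  # Bob's estimate that cryonics makes him immortal
p_ai_immortal = 0.20        # Bob's estimate that friendly AI makes him immortal
p_ai_doom = 0.80            # Bob's estimate that AI destroys the world

# Sanity check: the two AI outcomes in the example exhaust the possibilities.
assert abs(p_ai_immortal + p_ai_doom - 1.0) < 1e-9

# On these numbers, the AI route is the likelier path to Bob's immortality,
# even though it is also far more likely to destroy the world.
better_route = "AI" if p_ai_immortal > p_cryonics_immortal else "cryonics"
print(better_route)
```

The point the numbers make is that “better than nothing” and “better than the alternative” come apart: cryonics can have decent absolute odds while still being the less likely route to immortality under Bob’s estimates.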
I didn’t say that “cryonics is better than nothing”; I said I think it has decent odds of success. To spell it out, I think the success probability is higher than the increased probability of friendly AI in my lifetime from acceleration (which is the relevant comparison), while imposing fewer costs on future generations. And I think that if you made it your life’s work, you could probably improve those odds, up to 80% perhaps (conditional on future people wanting to revive you).