At this point timelines look short enough that you likely increase your personal odds of survival more by increasing the chance that AI goes well than by speeding up timelines. Also I don’t see why you think cryonics doesn’t make sense as an alternative option.
Cryonics is likely a very tough sell to the “close ones”.
I wish I could convince my grandpa to sign up for cryonics, but he’s a 95-year-old Indian doctor in India, where facilities for cryopreservation only extend to organs and eggs, so it’s moot even apart from the fact that I can’t convince him.
I expect my parents to survive until the Singularity, whether or not it kills us in the process. Same for me; and given my limited income, I’m not spending it on cryonics, since a hostile AGI would kill even the ones frozen away.
You probably also irreversibly lose a ton of information with cryonics.
The timelines certainly still looked short enough a couple of months ago. But what prompted me to write this was the 13th observation: the seemingly snowballing Pause movement, which, once it reaches a certain threshold, has the potential to significantly stifle the development of AI. Analogies: human genetic enhancement, nuclear energy. I’m not sure whether it is already strong enough to overcome the opposing forces (useful applications, Moore’s law), but I’m also not sure that it isn’t (or won’t be soon).
Cryonics is a very speculative tech. We don’t understand how much information is lost in the process, the scientific evidence seems lacking overall (consensus being it’s in the ~few percent success probability region), a future AI (future society) would have to want to revive humans instead of creating new ones, etc.
> consensus being it’s in the ~few percent success probability region
Consensus among whom? I haven’t been able to find a class of experts I’d defer to. We have Alcor, who are too partial; we have the Society for Cryobiology, who openly refuse to learn anything about the process and threaten to expel any member who does; and I have random members of the rationalist community who have no obligation to be right and just want to sound grounded.
In order for a pause to work, it has to happen everywhere. Nuclear power is widely deployed in e.g. France, so a pause on AI would need a stronger political force than the one that kept nuclear power from proliferating.
AI is also more like the “keys to the kingdom” here.
The benefit of nuclear power isn’t that huge: you still have fossil fuels, which are cheap (even if they cause climate change in the long run).
The benefits of genetic editing/eugenics are also pretty nebulous and may take decades to realize.
On the other hand, one country having an aligned ASI offers an overwhelming advantage: world dominance goes from fiction to mundane reality. I think these sorts of treaties also advertise this fact, so I think it’s likely they won’t work. All governments are probably seriously considering what I mentioned above.
Why is the U.S. blocking China’s access to GPUs? That seems like the most plausible explanation.
High-end GPUs are needed for basically anything mundane today. No need to bring in AGI worries to make them a strategic resource.
I think the timing and the performance focus of it all make it clear it’s related to foundation models.
If you think cryonics has a very high likelihood of working, then sure. I don’t think the arguments that cryonics is likely to work are that good, though. I don’t think Eliezer has even made arguments to that effect. They were mostly “hey, doing cryonics is better than not doing it, because not doing cryonics is just death!”
I think Eliezer is pretty confident that cryonics will work. For myself, I’m not sure; I haven’t really looked into it that deeply, but the a priori argument makes sense, and I feel like this is the kind of thing people would be irrationally biased against due to its speculative nature (similar to the AGI skepticism that many had until recently), so I’d give it decent odds.
> Also I don’t see why you think cryonics doesn’t make sense as an alternative option.

I was responding to this point. The “cryonics is better than nothing” argument doesn’t make cryonics an alternative option to immortality via friendly AI. If Bob thinks cryonics has a 10% chance of making him immortal, and thinks AI has a 20% chance of making him immortal and an 80% chance of destroying the world, then the superhuman-AI route is still more likely to lead to Bob’s immortality than cryonics.
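To make the comparison concrete, here is a minimal sketch in Python of Bob’s calculation. The probabilities are the hypothetical ones from the paragraph above, not estimates of anything:

```python
# Bob's hypothetical credences from the comment above (illustrative only).
p_cryonics_immortal = 0.10  # cryonics preserves and revives Bob
p_ai_immortal = 0.20        # superhuman AI makes Bob immortal
p_ai_doom = 0.80            # superhuman AI destroys the world

# Route 1: bet on cryonics. Route 2: bet on the superhuman-AI path.
print(f"Cryonics route: {p_cryonics_immortal:.0%} chance of immortality")
print(f"AI route:       {p_ai_immortal:.0%} chance of immortality")

# Even with an 80% chance of doom, the AI route (20%) beats cryonics (10%),
# which is the point: "better than nothing" does not make cryonics the
# better alternative.
assert p_ai_immortal > p_cryonics_immortal
```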
I didn’t say that “cryonics is better than nothing”; I said I think it has decent odds of success. To spell it out: I think the success probability is higher than the increased probability of friendly AI in my lifetime from acceleration (which is the relevant comparison), while imposing fewer costs on future generations. And I think that if you made it your life’s work, you could probably improve those odds, perhaps up to 80% (conditional on future people wanting to revive you).
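The comparison here differs from Bob’s: it is not cryonics versus the AI route as a whole, but cryonics versus the marginal gain from accelerating AI. A sketch under made-up numbers (the thread gives none of these figures, except the “up to 80% perhaps” upper bound for cryonics):

```python
# All numbers below are illustrative assumptions, not estimates from the thread.
p_friendly_ai_baseline = 0.20     # P(friendly AI within my lifetime), no acceleration
p_friendly_ai_accelerated = 0.25  # same probability if development is accelerated
marginal_gain = p_friendly_ai_accelerated - p_friendly_ai_baseline  # ~0.05

p_cryonics_success = 0.10  # "decent odds"; the comment argues effort could raise this

# The claim: accelerating only buys the marginal increase (~5 points here),
# while cryonics adds its full success probability on top of whichever
# AI outcome happens anyway.
print(p_cryonics_success > marginal_gain)  # True under these assumed numbers
```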