The timelines certainly still looked short enough a couple of months ago. But what prompted me to write this was the 13th observation: the seemingly snowballing Pause movement, which, once it reaches a certain threshold, has the potential to significantly stifle the development of AI. Analogies: human genetic enhancement, nuclear energy. I’m not sure whether this is already past the point of countering the opposing forces (useful applications, Moore’s law), but I’m also not sure that it isn’t (or won’t be soon).
Cryonics is a very speculative tech. We don’t understand how much information is lost in the process, scientific evidence seems lacking overall—consensus being it’s in the ~few percent success probability region, future AI (future society) would have to want to revive humans instead of creating new ones, etc.
> consensus being it’s in the ~few percent success probability region
Consensus among whom? I haven’t been able to find a class of experts I’d defer to. We have Alcor, who are too partial; we have the Society for Cryobiology, who openly refuse to learn anything about the process and threaten to expel any member who does; and we have random members of the rationalist community, who have no obligation to be right and just want to sound grounded.
In order for a pause to work, it has to happen everywhere. Nuclear power is still widely deployed in, e.g., France, so an AI pause would need a stronger political force than the one that kept nuclear power from proliferating elsewhere.
AI is also more like the “keys to the kingdom” here.
The benefit of nuclear power isn’t that huge: you still have fossil fuels, which are cheap (even if they cause climate change in the long run).
The benefits of genetic editing/eugenics are also pretty nebulous and may take decades to realize.
On the other hand, one country having an aligned ASI offers an overwhelming advantage: world dominance goes from fiction to mundane reality. These sorts of treaties also advertise that fact, so I think it’s likely they won’t work. All governments are probably seriously considering what I described above.
Why else is the U.S. blocking China’s access to GPUs? That seems like the most plausible explanation.
High-end GPUs are needed for basically everything mundane today. No need to bring in AGI worries to make them a strategic resource.
I think the timing and the focus on performance make it clear it’s related to foundation models.