Yes to all of the first paragraph. A caveat is that there’s a big difference between humans remaining nominally in charge of an AGI-driven economy and not. If we’re still technically in charge, we will retire (however many of us those in charge care to support; hopefully eventually quadrillions or so); if not, we’ll either be entirely extinct or have a few of us maintained for historical interest by the new AGI overlords.
I see no way to meaningfully pause AI in time. We could possibly pause US progress with adequate fearmongering, but that would just make China get there first. That could be a good thing if they’re more cautious, which it now seems they might very well be. But that only holds if Xi, or whoever winds up in charge, is not a sociopath, which I have no idea about.
Pausing for decades requires an international treaty powerful enough to keep advanced semiconductor manufacturing from getting into the hands of a faction that would defect on the pause. But that manufacturing is already very distributed: one hears a lot about ASML, but the tools it produces are not the only crucial thing; other similarly crucial tools are exclusively manufactured in several other countries. So starting this process quickly shouldn’t be too difficult from the technical side; the issue is deciding to actually do it and then sustaining it even as individual nations get enough time to catch up on all the details that go into semiconductor manufacturing (which could take actual decades). And this doesn’t seem different in kind from controlling the means of manufacturing nuclear arms.
This doesn’t work if the AI accelerators already in the wild (in quantities a single actor could amass) are sufficient for an AGI capable of fast autonomous unbounded research (designed through merely human effort), but that could plausibly go either way. And it requires any new AI accelerators to be built differently, so that physically obtaining them is not sufficient to run arbitrary computations on them. That way, there is no temptation to seize such accelerators by force, and so no need to worry about enforcing the pause at the level of physical datacenters.
Yes, the issue is deciding to actually do it. That might happen if you just needed the US and China. But I see no way to keep the signatories from defecting even after they’d signed a treaty saying they wouldn’t.
I have no expertise in hardware security, but I’d be shocked if there were a way to prevent unauthorized use even with physical possession in technically skilled (nation-state level) hands.
The final problem is that we probably already have plenty of compute to create AGI once some more algorithmic improvements are discovered. Tracked since 2013, algorithmic improvements for neural networks have been roughly as fast as hardware improvements, depending on how you do the math. Sorry, I don’t have the reference. In any case, algorithmic improvements are real and large, so hardware limitations alone won’t suffice for that long. Human brain computational capacity is neither an upper nor a lower bound on the computation needed to reach superhuman digital intelligence.
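To see why this compounding matters, here is a back-of-the-envelope sketch. The yearly growth rates are assumptions chosen purely for illustration (they are not from the claim above): if hardware price-performance and algorithmic efficiency each improve by roughly the same factor per year, the effective compute available for a fixed capability grows as their product.

```python
# Toy illustration of hardware and algorithmic progress compounding.
# Both yearly rates below are assumptions for illustration only.
hardware_gain_per_year = 1.4  # assumed hardware price-performance growth
algo_gain_per_year = 1.4      # assumed algorithmic-efficiency growth

for years in (5, 10, 20):
    # Effective compute for a fixed capability is the product of the two.
    effective = (hardware_gain_per_year * algo_gain_per_year) ** years
    print(f"after {years:2d} years: ~{effective:,.0f}x effective compute")
```

At these assumed rates, effective compute roughly doubles every year, which is why freezing hardware alone leaves a large overhang of purely algorithmic gains.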
If you get certificate checking inside each GPU, and somehow give it a persistent counter state (it doesn’t have to be a clock, just advance while the GPU operates) that can’t be reset, then you can issue one-time certificates for a specific GPU and a specific range of states of its internal counter, signed with asymmetric cryptography, which can’t be forged by examining the GPU. The most plausible ways around this would be replay attacks that reuse old certificates while fooling the GPU into thinking it’s in the past. But given how many transistors modern GPUs have, it should be possible to physically distribute the logic that implements the certificate checking and the counter state, and make it redundant, so that sufficient tampering would become infeasible, at least at scale (for millions of GPUs).
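A minimal sketch of that scheme, under assumptions not in the original comment: Ed25519 signatures (via the Python cryptography library) stand in for whatever signature scheme the hardware would actually use, and the GPU is modeled as an ordinary object holding a burned-in public key and a monotonic counter.

```python
# Sketch of one-time certificates bound to a GPU and a counter window.
from dataclasses import dataclass
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)

@dataclass(frozen=True)
class Certificate:
    gpu_id: str          # which physical GPU this certificate is bound to
    counter_start: int   # first counter value at which it is valid
    counter_end: int     # last counter value at which it is valid
    signature: bytes     # issuer's signature over the three fields above

def issue_certificate(issuer_key: Ed25519PrivateKey,
                      gpu_id: str, start: int, end: int) -> Certificate:
    """Issuer (off-device) signs a (gpu_id, counter range) tuple."""
    message = f"{gpu_id}:{start}:{end}".encode()
    return Certificate(gpu_id, start, end, issuer_key.sign(message))

class ToyGPU:
    """On-device logic: a burned-in public key plus a counter that only
    advances while the GPU operates and cannot be reset."""
    def __init__(self, gpu_id: str, issuer_pub: Ed25519PublicKey):
        self.gpu_id = gpu_id
        self.issuer_pub = issuer_pub
        self.counter = 0  # monotonic; advancing is the only mutation

    def run_workload(self, cert: Certificate, steps: int) -> bool:
        message = f"{cert.gpu_id}:{cert.counter_start}:{cert.counter_end}".encode()
        try:
            self.issuer_pub.verify(cert.signature, message)
        except InvalidSignature:
            return False  # forged or altered certificate
        if cert.gpu_id != self.gpu_id:
            return False  # certificate bound to a different GPU
        if not (cert.counter_start <= self.counter <= cert.counter_end):
            return False  # counter outside window: expired or replayed
        self.counter += steps  # operating the GPU advances the counter
        return True

# Usage: the certificate works inside its counter window, then expires,
# so replaying it later fails even with physical possession of the GPU.
issuer = Ed25519PrivateKey.generate()
gpu = ToyGPU("gpu-0001", issuer.public_key())
cert = issue_certificate(issuer, "gpu-0001", start=0, end=999)
assert gpu.run_workload(cert, steps=500)    # counter 0, in window: allowed
assert gpu.run_workload(cert, steps=600)    # counter 500, still <= 999
assert not gpu.run_workload(cert, steps=1)  # counter 1100 > 999: denied
```

The property the sketch exercises is that a certificate is bound to one GPU and one counter window, so once the counter has advanced past the window, replaying it fails even for someone with physical possession; in real hardware the hard part is making the counter and the verification logic tamper-resistant, per the paragraph above.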
Algorithmic advancements, where it makes sense to talk about them quantitatively, are not that significant. The transformer made scaling to modern levels possible at all, and there has been maybe a 10x improvement in compute efficiency since then (Llama+MoE); most (though not all) of the ingredients relevant specifically to compute efficiency were already there in 2017 and just didn’t make it into the initial recipe. If there is a pause, there should be no advancement in the fabrication process; instead, the technical difficulty of advanced semiconductor manufacturing becomes the main lever of enforcement. More qualitative advancements, like hypothetical scalable self-play for LLMs, are different, but then if there are a few years to phase out unrestricted GPUs, there is less unaccounted-for compute for experiments and eventual scaling.