It feels like it is often assumed that the best way to prevent AGI ruin is through AGI alignment, but this isn’t obvious to me. Do you think that we need to use AGI to prevent AGI ruin?
Here’s a proposal (there are almost certainly better ones): because of the large amount of compute required to create AGI, governments enact strict regulation to prevent AGI from being created. Of course, the amount of compute needed to create AGI probably goes down every year, but this buys a lot of time, during which one might be able to enact more careful AI regulation, or pull off a state-sponsored, AGI-powered pivotal act project.
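To put a rough number on the "large amount of compute" premise, here is a minimal back-of-envelope sketch (not part of the original discussion; every figure in it is an illustrative assumption) using the commonly cited C ≈ 6·N·D heuristic for training FLOPs. The only point is scale: under these made-up numbers, a frontier-scale run comes out to thousands of GPU-years, i.e. the kind of concentrated footprint that regulation could at least in principle monitor.

```python
# Back-of-envelope sketch of frontier-scale training compute.
# Every number here is an illustrative assumption, not a claim about any real system.
# Uses the common heuristic: training FLOPs C ~= 6 * N * D
# (N = parameter count, D = number of training tokens).

ASSUMED_PARAMS = 1e12           # hypothetical parameter count
ASSUMED_TOKENS = 2e13           # hypothetical number of training tokens
ASSUMED_FLOPS_PER_GPU = 4e14    # hypothetical sustained FLOP/s per accelerator

total_flops = 6 * ASSUMED_PARAMS * ASSUMED_TOKENS   # ~1.2e26 FLOPs
gpu_seconds = total_flops / ASSUMED_FLOPS_PER_GPU
gpu_years = gpu_seconds / (3600 * 24 * 365)

print(f"total training FLOPs: {total_flops:.1e}")
print(f"GPU-years at assumed throughput: {gpu_years:,.0f}")   # ~9,500 GPU-years
```

As the proposal itself notes, the compute needed for a given capability level presumably shrinks every year, so any such threshold would erode over time; the sketch only illustrates why compute looks like a governable chokepoint today.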
It seems very unlikely that any one AI organization will be years ahead of everyone else on the road to AGI, so one of the main policy challenges is to make sure that all of the organizations that could deploy an AGI and cause ruin somehow decide not to do so. Getting all of these organizations to refrain from deploying AGI seems easier to pull off than preventing AGI outright via government regulation, though perhaps not by much, and the benefit of not having to solve alignment seems very large.
The key downside of this path, as I see it, is that it strictly cuts off the option of using an AGI to perform the pivotal act, because government regulation would prevent that AGI from being built. And with AGI prevented by regulation, we are still on the precipice: a later, harder-to-prevent AGI ruin might happen, or another x-risk could materialize. But it’s not clear to me which path gives a higher likelihood of success.
Military-backed hackers can effortlessly gain access to or hijack compute elsewhere, which means that state-backed AI development is not going to be constrained by regulation of that kind at all. This is one of the big reasons why EY has made high-profile statements about the idea of eliminating all compute, even though that idea is considered heretical by essentially all of the decision-makers in the AI domain.
It’s also the only reason why people talk about “slowing down AI progress” through sweeping, stifling industry regulation instead of banning specific kinds of AI, even though that is actually even more heretical: it could conceivably happen in English-speaking countries without an agreement that successfully sets up enduring regulation in Russia and China. Trust problems in the international arena are already astronomically complex by default, because there are large numbers of agents (e.g. spies) who inherently strive for maximum non-transparency and information asymmetry.
Politically, it would be easier to enact a policy requiring complete openness about all research, rather than to ban it.
Such a policy would have the side effect of also slowing research progress, since corporations and governments rely on secrecy to gain advantages.
They rely on secrecy to gain relative advantages, but in absolute terms openness increases research speed: it increases the amount of technical information available to every actor.