Isn’t “bomb all sufficiently advanced semiconductor fabs” an example of a pivotal act that the US government could do right now, without any AGI at all?
If current hardware is sufficient for AGI, then maybe that doesn’t make us safe, but plausibly current hardware is not sufficient for AGI, and either way stopping hardware progress would slow AI timelines a lot.
Sort of. As stated earlier, I’m now relatively optimistic about non-AI-empowered pivotal acts.
There are two big questions.
First: “is that an accessible pivotal act?” What needs to be different such that the US government would actually do that? How would it maintain legitimacy and the ability to continue bombing fabs afterwards? Would all ‘peer powers’ agree to this, or have you just started WWIII at tremendous human cost? Have you just driven this activity underground, or has it actually stopped?
Second: “does that make the situation better or worse?”. In the sci-fi universe of Dune, humanity outlaws all computers for AI risk reasons, and nevertheless makes it to the stars… aided in large part by unexplained magical powers. If we outlaw all strong computers in our universe without magical powers, will we make it to the stars, or be able to protect our planet from asteroids and comets, or be able to cure aging, or be able to figure out how to align AIs?
I think if we stayed at, say, 2010s levels of hardware we’d probably be fine and able to protect our planet from asteroids and the like, and maybe it would also be fine at 2020s or 2030s levels (though obviously more hardware seems more risky). So I think there are lots of ‘slow down hardware progress’ options that do actually make the situation better, and people should put effort into trying to accomplish this legitimately. But I’m pretty confused about what to do in situations where we don’t have a plan for how to turn low-hardware years into more alignment progress.
According to a bunch of people, it will be easier to make progress on alignment when we have more AI capabilities, which seems right to me. Empirically, it also seems like the more AI can do, the more people think it’s reasonable to worry about AI, which is a sad constraint that we have to operate around. I think it will also be easier to do dangerous things with more AI capabilities, so the net effect is probably bad, but I’m open to arguments of the form “actually, you needed transformers to exist in order for your interpretability work to be pointed in the right direction at all”, which suggest we do need to go a bit further before stopping in order to do well at alignment. But let’s actually hear those arguments, in both directions.
I don’t think “burn all GPUs” fares better on any of these questions. I guess you could imagine it being more “accessible” if you think building aligned AGI is easier than convincing the US government that AI risk is truly an existential threat (which seems implausible).
“Accessibility” illustrates the extent to which AI risk can be seen as a social rather than technical problem: if a small number of decision-makers in the US and Chinese governments (and perhaps some semiconductor and software companies) were really convinced that AI risk was a serious concern, they could negotiate to slow hardware progress. But the arguments are not convincing (including to me), so they don’t.
In practice, negotiation and regulation (somewhat similar to nuclear non-proliferation, I guess) would be a lot better than literally bombing fabs. I don’t think being driven underground is a realistic concern, since cutting-edge fabs are extremely expensive.