I don’t think “burn all GPUs” fares better on any of these questions. I suppose you could imagine it being more “accessible” if you think building aligned AGI is easier than convincing the US government that AI risk is truly an existential threat (which seems implausible).
“Accessibility” seems to illustrate the extent to which AI risk can be seen as a social rather than technical problem: if a small number of decision-makers in the US and Chinese governments (and perhaps a few semiconductor and software companies) were really convinced that AI risk was a serious concern, they could negotiate to slow hardware progress. But the arguments are not convincing (they don’t convince me either), so they don’t.
In practice, negotiation and regulation (I guess somewhat analogous to nuclear non-proliferation) would be a lot better than “literally bomb fabs”. I don’t think development being driven underground is a realistic concern; cutting-edge fabs are far too expensive (and conspicuous) to build covertly.