Pivotal Acts are easier than Alignment?
The prevailing notion in AI safety circles is that a pivotal act—an action that decisively alters the trajectory of artificial intelligence development—requires superhuman AGI, which itself poses extreme risks. I challenge this assumption.
Consider a pivotal act like “disable all GPUs globally.” This could potentially be achieved through less advanced means, such as a sophisticated computer virus akin to Stuxnet. Such a virus could be designed to replicate widely and render GPUs inoperable, and building it would not require a system capable of far more dangerous feats, such as designing bioweapons.
I’ve observed a lack of discussion around these “easier” pivotal acts in the AI safety community. Given the possibility that AI alignment might prove intractable, shouldn’t we be exploring alternative strategies to prevent the emergence of superhuman AI?
I propose that this avenue deserves significantly more attention. If AI alignment is indeed unsolvable, a pivotal act to halt or substantially delay superhuman AI development could be our most crucial safeguard.
I’m curious to hear the community’s thoughts on this perspective. Are there compelling reasons why such approaches are not more prominently discussed in AI safety circles?