Agreed.
However, there is no collective “we” to whom this message can be effectively directed. The readers of LW are not the ones who can influence the overarching policies of the US and China. That said, leaders at OpenAI and Anthropic might come across this.
This leads to the question of how to halt AI development on a global scale. Several propositions have been put forth:
1. A worldwide political agreement. Given the current state of wars and conflicts, this seems improbable.
2. A global nuclear war. As the likelihood of a political agreement diminishes, the probability of war increases.
3. Employing the first AI to establish a global control system that hinders the development of subsequent AIs. However, if this AI possesses superintelligence, the associated risks resurface. Therefore, this global control AI should not be superintelligent. It could be a human upload or a data-driven AI (as opposed to one that’s intelligence-augmented), like a surveillance system with constrained cognition.
4. Relying on extraterrestrial beings, UFOs, simulation theories, or the anthropic principle for assistance. For instance, the reverse doomsday argument suggests that the end is unlikely to be imminent.
3. doesn’t seem like a viable option, since there’s a decent chance such an AI could disguise itself as less than superintelligent.
An AI Nanny could be built in ways that exclude this, e.g. as a combination of narrow neural nets, each capable of detecting only certain types of activity, rather than an AGI or advanced LLM.
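A minimal sketch of what that composition might look like, to make the idea concrete. This is my own illustration, not anything proposed in the thread: the detector names, thresholds, and input fields are all hypothetical stand-ins for narrow, single-purpose classifiers, and the key point is that the combination logic is a fixed hand-written rule with no general reasoning or language model in the loop.

```python
# Hypothetical sketch of an "AI Nanny" built from narrow detectors only.
# Each detector handles exactly one kind of signal; nothing here is general-purpose.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Detection:
    kind: str     # which narrow detector fired
    score: float  # confidence in [0, 1]


# Stub detectors. In practice each would be a small neural net trained for one
# task (e.g. spotting unusually large GPU orders or the power signature of a
# big training run). The field names and thresholds below are made up.
def detect_large_compute_purchase(record: Dict) -> Detection:
    score = 1.0 if record.get("gpu_count", 0) > 10_000 else 0.0
    return Detection("compute_purchase", score)


def detect_training_power_signature(record: Dict) -> Detection:
    score = 1.0 if record.get("sustained_megawatts", 0.0) > 20 else 0.0
    return Detection("power_signature", score)


DETECTORS: List[Callable[[Dict], Detection]] = [
    detect_large_compute_purchase,
    detect_training_power_signature,
]


def nanny_flags(record: Dict, threshold: float = 0.5) -> List[Detection]:
    """Run every narrow detector and return the ones that fire.

    The combination step is a fixed rule, not a learned or general model,
    which is what keeps the overall system's cognition constrained.
    """
    flags = []
    for detector in DETECTORS:
        result = detector(record)
        if result.score >= threshold:
            flags.append(result)
    return flags


if __name__ == "__main__":
    suspicious = {"gpu_count": 25_000, "sustained_megawatts": 35.0}
    for flag in nanny_flags(suspicious):
        print(f"flagged: {flag.kind} (score={flag.score})")
```

The design choice this is meant to highlight: each component is individually too narrow to disguise itself or to reason about its overseers, so the deception worry raised about option 3 applies less directly, at the cost of the system only catching the specific activity patterns it was built to detect.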