While I agree with the overall sentiment, I think the most important claims within this post are poorly supported and factually false.
The first step toward AI alignment is to stop pushing the capability frontier, immediately. Then we might have enough time to find a way to design AIs that are aligned, or determine that this problem is too hard for us to solve in a suitably reliable way.
Whether or not any particular military equipment has more computing power than last year is of essentially zero relevance to AI alignment and outcomes from misaligned AI. It’s not like a superintelligence that would otherwise kill everyone will be stopped because some military hardware doesn’t have target recognition built in (or whatever other primitive algorithms humans happen to write).
I’m against humans developing most weapons, ‘intelligent’ or not, for human reasons, not because I think they would make future superintelligent entities more dangerous. Superintelligence is inherently dangerous since it can almost certainly devise strategies (which may not even involve weapons as we think of them) against which we have no hope of conceiving a defence in time for it to matter.
Superintelligence is inherently dangerous, yes. The rapid increase in capabilities is destabilizing, yes. Practically speaking, though, we humans can handle and learn from failure, provided it is not catastrophic, and an unexpected superintelligence would be catastrophic. Still, it will be hard to convince people to abandon currently benign AI models on the principle that they could spontaneously give rise to a superintelligence. A more feasible approach would start with the most dangerous and misanthropic manifestations of AI: those specialized to kill humans.