Let’s define a weapon as any tool which could be used to mount an attack.
Why? That broadens the definition of “weapon” to mean literally any tool, technology, or tactic by which one person or organization can gain an advantage over another. It’s far broader than, and connotationally very different from, the implied definition of “weapon” given by “building intelligent machines that are designed to kill people” and the examples of “suicide drones”, “assassin drones” and “robot dogs with mounted guns”.
Redefining “weapon” in this way turns your argument into a motte-and-bailey, where you’re redefining a word that connotes direct physical harm (e.g. robots armed with guns, bombs, knives, etc.) to mean any machine that can, on its own, gain some kind of resource advantage over humans. Most people would not, for example, consider a superior stock-trading algorithm to be a “weapon”, but under your (re)definition, it would be.
It is a broad definition, yes, for the purpose of discussing the potential for the tools in question to be used against humans.
My point is this: we should focus first on limiting the most potent vectors of attack: those which involve conventional ‘weapons’. Less potent vectors (those not commonly considered weapons), such as a ‘stock-trading algorithm’, are of lower priority, since they offer more opportunities for detection and mitigation.
An algorithm that amasses wealth should eventually set off red flags (maybe banks need to improve their audits and identification requirements). Additionally, wealth is only useful when spent on a specific purpose. Those purposes could be countered by a government, if the government possesses sufficient ‘weapons’ to eliminate the offending machines.
If this algorithm takes actions so subtle that they cannot be detected in time to prevent catastrophe, then we are doomed. However, it is also likely that the algorithm will have weaknesses which allow it to be detected.
That’s exactly where I disagree. Conventional weapons aren’t all that potent compared to social, economic, or environmental changes.
Social, economic, or environmental changes happen relatively slowly, on the scale of months or years, compared to potent weapons, which can destroy whole cities in a single day. Therefore, conventional weapons would be a much more immediate danger if corrupted by an AI. The other problems are important to solve, yes, but first humanity must survive its more deadly creations. The field of cybersecurity will continue to evolve in the coming decades. Hopefully world militaries can keep up, so that no rogue intelligence gains control of these weapons.
To repeat what I said above: even a total launch of all the nuclear weapons in the world will not be sufficient to ensure human extinction. However, AI-driven social, economic, and environmental changes could ensure just that.
If an AI got hold of a few nuclear weapons and launched them, that would, in fact, probably be counterproductive from the AI’s perspective, because in the face of such a clear warning sign, humanity would probably unite and shut down AI research and unplug its GPU clusters.