However, weapons provide the most dangerous vector of attack for a rogue, confused, or otherwise misanthropic AI.
I’m not sure why you think that. Human weapons, as horrific as they are, can only cause localized tragedies. Even if we gave the AI access to all of our nuclear weapons, and it fired them all, humanity would not be wiped out. Millions (possibly billions) would perish. Civilization would likely collapse or be set back by centuries. But human extinction? No. We’re tougher than that.
But an AI that competes with humanity, in the same way that Homo sapiens competed with Homo neanderthalensis? That could wipe out humanity. We wipe out other species all the time, and only in a small minority of cases is it because we’ve turned our weapons on them and hunted them into extinction. It’s far more common for a species to go extinct because humanity needed the habitat and other natural resources that the species needed to survive, and outcompeted it for access to those resources.
Entities compete in various ways, yes. Competition is an attack on another entity’s chances of survival. Let’s define a weapon as any tool which could be used to mount an attack. Of course, every tool could be used as a weapon, in some sense. It’s a question of how much risk our tools would pose to us if they were turned against us.
Why? That broadens the definition of “weapon” to mean literally any tool, technology, or tactic by which one person or organization can gain an advantage over another. It’s far broader than, and connotationally very different from, the implied definition of “weapon” given by “building intelligent machines that are designed to kill people” and the examples of “suicide drones”, “assassin drones” and “robot dogs with mounted guns”.
Redefining “weapon” in this way turns your argument into a motte-and-bailey, where you’re redefining a word that connotes direct physical harm (e.g. robots armed with guns, bombs, knives, etc.) to mean any machine that can, on its own, gain some kind of resource advantage over humans. Most people would not, for example, consider a superior stock-trading algorithm to be a “weapon”, but under your (re)definition, it would be.
It is a broad definition, yes, for the purpose of discussing the potential for the tools in question to be used against humans.
My point is this: we should focus first on limiting the most potent vectors of attack: those which involve conventional ‘weapons’. Less potent vectors (those not commonly considered weapons), such as a ‘stock-trading algorithm’, are of lower priority, since they offer more opportunities for detection and mitigation.
An algorithm that amasses wealth should eventually set off red flags (maybe banks need to improve their audits and identification requirements). Additionally, wealth is only useful when spent on a specific purpose, and a government could counter that spending, provided it possesses sufficient ‘weapons’ to eliminate the offending machines.
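To make the ‘red flag’ idea concrete, here is a minimal sketch of the kind of automated audit check I have in mind. Everything in it (the account fields, the 90-day window, the 50x growth threshold) is invented for illustration; a real bank audit would be far more sophisticated. The point is only that unusually fast accumulation of wealth is the sort of signal that can be checked for mechanically.

```python
# Hypothetical sketch of an automated bank-audit "red flag": flag any account
# whose balance grows implausibly fast over a fixed window. All fields,
# thresholds, and the growth model are invented for illustration only.

from dataclasses import dataclass


@dataclass
class AccountSnapshot:
    account_id: str
    balance_start: float  # balance at the start of the audit window
    balance_end: float    # balance at the end of the audit window
    window_days: int      # length of the audit window, in days


# Assumed policy: more than 50x growth within 90 days counts as anomalous.
MAX_GROWTH_FACTOR = 50.0
AUDIT_WINDOW_DAYS = 90


def flag_anomalous_accounts(snapshots: list[AccountSnapshot]) -> list[str]:
    """Return the ids of accounts whose wealth grew anomalously fast."""
    flagged = []
    for snap in snapshots:
        # Skip accounts outside the audit window, or with no starting balance
        # to compare against.
        if snap.window_days > AUDIT_WINDOW_DAYS or snap.balance_start <= 0:
            continue
        growth = snap.balance_end / snap.balance_start
        if growth > MAX_GROWTH_FACTOR:
            flagged.append(snap.account_id)
    return flagged


if __name__ == "__main__":
    accounts = [
        AccountSnapshot("retail-001", 10_000.0, 11_200.0, 90),   # ordinary growth
        AccountSnapshot("algo-007", 10_000.0, 2_500_000.0, 90),  # 250x in 90 days
    ]
    print(flag_anomalous_accounts(accounts))  # ['algo-007']
```

A check this crude would obviously miss a careful adversary, but it illustrates why accumulating resources through the financial system leaves detectable traces in a way that a launched missile does not.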
If this algorithm takes actions so subtle that they cannot be detected in time to prevent catastrophe, then we are doomed. However, it is also likely that the algorithm will have weaknesses that allow it to be detected.
That’s exactly where I disagree. Conventional weapons aren’t all that potent compared to social, economic, or environmental changes.
Social, economic, or environmental changes happen relatively slowly, on the scale of months or years, compared to potent weapons, which can destroy whole cities in a single day. Therefore, conventional weapons would be a much more immediate danger if corrupted by an AI. The other problems are important to solve, yes, but first humanity must survive its more deadly creations. The field of cybersecurity will continue to evolve in the coming decades. Hopefully world militaries can keep up, so that no rogue intelligence gains control of these weapons.
To repeat what I said above: even a total launch of all the nuclear weapons in the world would not be sufficient to ensure human extinction. However, AI-driven social, economic, and environmental changes could ensure just that.
If an AI got hold of a few nuclear weapons and launched them, that would, in fact, probably be counterproductive from the AI’s perspective, because in the face of such a clear warning sign, humanity would probably unite and shut down AI research and unplug its GPU clusters.