It is a broad definition, yes, for the purpose of discussing the potential for the tools in question to be used against humans.
My point is this: we should focus first on limiting the most potent vectors of attack, namely those involving conventional ‘weapons’. Less potent vectors (those not commonly considered weapons), such as a ‘stock trading algorithm’, are of lower priority, since they offer more opportunities for detection and mitigation.
An algorithm that amasses wealth should eventually set off red flags (maybe banks need to improve their audits and identification requirements). Additionally, wealth is only useful when spent on a specific purpose, and those purposes could be countered by a government, provided it possesses sufficient ‘weapons’ to eliminate the offending machines.
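To make the ‘red flags’ point a bit more concrete, here is a minimal sketch of the kind of automated audit rule a bank could run to surface accounts that accumulate wealth implausibly fast. The field names, data structure, and growth threshold are all assumptions chosen for illustration, not any real compliance system.

```python
from dataclasses import dataclass

@dataclass
class AccountSnapshot:
    """Hypothetical view of an account over one audit window."""
    account_id: str
    balance_start: float  # balance at the start of the window
    balance_end: float    # balance at the end of the window
    days: int             # length of the window in days

def flag_suspicious_growth(snapshots, max_daily_growth=0.02):
    """Return IDs of accounts whose average daily growth rate exceeds the threshold.

    The 2% daily threshold is an arbitrary illustrative cutoff, not a real rule.
    """
    flagged = []
    for s in snapshots:
        if s.balance_start <= 0 or s.days <= 0:
            continue  # skip windows where a growth rate is undefined
        # Geometric average daily growth over the window.
        daily_growth = (s.balance_end / s.balance_start) ** (1 / s.days) - 1
        if daily_growth > max_daily_growth:
            flagged.append(s.account_id)
    return flagged

# Example: an account that triples its balance in 30 days gets flagged.
print(flag_suspicious_growth(
    [AccountSnapshot("acct-1", 100_000.0, 300_000.0, 30)]
))
```

A real audit would of course look at far more than balance growth, but even a crude rule like this shows why slow, wealth-based strategies leave a detectable trail in a way that a sudden weapons strike does not.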
If this algorithm takes actions so subtle that they cannot be detected in time to prevent catastrophe, then we are doomed. However, there is also a good chance that the algorithm will have weaknesses that allow it to be detected.
That’s exactly where I disagree. Conventional weapons aren’t all that potent compared to social, economic, or environmental changes.
Social, economic, or environmental changes happen relatively slowly, on the scale of months or years, whereas potent weapons can destroy whole cities in a single day. Therefore, conventional weapons would be a much more immediate danger if corrupted by an AI. The other problems are important to solve, yes, but first humanity must survive its more deadly creations. The field of cybersecurity will continue to evolve in the coming decades. Hopefully world militaries can keep up, so that no rogue intelligence gains control of these weapons.
To repeat what I said above: even a total launch of all the nuclear weapons in the world would not be sufficient to ensure human extinction. However, AI-driven social, economic, and environmental changes could ensure just that.
If an AI got hold of a few nuclear weapons and launched them, that would probably be counterproductive from the AI’s perspective: faced with such a clear warning sign, humanity would likely unite to shut down AI research and unplug its GPU clusters.