No, I think that short of a demonstration that strong AI is unfeasible, there is no way to defuse the risk enough that it would matter much. Even a very sophisticated but narrow ("autistic") AI — one with a limited set of abilities that never undergoes recursive self-improvement, but which nonetheless possesses some superhuman capabilities (as any AI would: superior memory, direct data access, etc.) — could pose an existential risk.
Take, for example, what is happening in Syria right now. The only reason the regime cannot squelch every protest is that nobody can supervise or control that many people. Give it an AGI that can watch a few million security cameras and control thousands of drones, and it could destroy most human values by implementing a worldwide dictatorship or theocracy.
You seem to be implying that if both the authorities and the insurgents have access to equally powerful AGI, then this works to the net benefit of the authorities.
I am skeptical of that premise, especially in the context of open revolt as we're seeing in Syria. I don't think a lack of eyeballs on cameras is a significant mechanism there; plain old human secret police would do fine for that, since people are protesting openly. The key dynamic I see is that the regime isn't confident the police or army will obey orders to use lethal force on a large scale.
I don't see how AI would change that dynamic. If both sides have it, the protesters can optimize their actions to stay within that zone of uncertainty, even as the government tries to act as aggressively as it can without risking the military defecting to the rebels.
Today we already have far more sophisticated weapons, communication, information storage, and information retrieval technology than was ever available before. It doesn't appear to have been a clear net benefit for either freedom or tyranny.
Do you envision AGI strengthening authorities in ways that 20th-century coercive technologies did not?