We don’t have recursively self-improving, superhumanly intelligent criminals yet. Only in comic books. Once we have a recursively self-improving superhuman AI, and it is not human-friendly, and it escapes… then we will have a comic-book situation in real life. Except we won’t have a superhero on our side.
That’s comic-book stuff. Society is self-improving faster than its components. Component self-improvement trajectories tend to be limited by the government breaking them up or fencing them in whenever they grow too powerful.
The “superintelligent criminal” scenario is broadly like worrying about “grey goo”—or about a computer virus taking over the world. It makes much more sense to fear humans with powerful tools that magnify their wills. Indeed, the “superintelligent criminal” scenario may well be a destructive meme—since it distracts people from dealing with that much more realistic possibility.
> Component self-improvement trajectories tend to be limited by the government breaking them up or fencing them in whenever they grow too powerful.
Counterexample: any successful revolution. A subset of society became strong enough to overthrow the government, despite the government trying to stop them.
> It makes much more sense to fear humans with powerful tools that magnify their wills.
Could a superhuman AI use human allies and give them this kind of tool?
>> Component self-improvement trajectories tend to be limited by the government breaking them up or fencing them in whenever they grow too powerful.
> Counterexample: any successful revolution. A subset of society became strong enough to overthrow the government, despite the government trying to stop them.
Sure, but look at the history of revolutions in large, powerful democracies. Of course, if North Korea develops machine intelligence, a revolution becomes more likely.
>> It makes much more sense to fear humans with powerful tools that magnify their wills.
> Could a superhuman AI use human allies and give them this kind of tool?
That’s pretty much what I meant: machine intelligence as a correctly functioning tool—rather than as an out-of-control system.
> That’s pretty much what I meant: machine intelligence as a correctly functioning tool—rather than as an out-of-control system.
It seems to me that you simply refuse to see an AI as an agent. If an AI and a human conquer the world, the only possible interpretation is that the human used the AI, never that the AI used the human. Even if it was all the AI’s idea, that just means the human used the AI as an idea generator. Even if the AI kills the human afterwards, that just means the human used the AI incorrectly and thus killed themselves.
Am I right about this?
Er, no—I consider machines to be agents.