Depends on how we will engineer them. If we build an algorithm, knowing what it does, then perhaps yes. If we try some black-box development such as "make this huge neural network, initialize it with random weights, teach it, make a few randomly modified copies and select the ones that learn fastest, etc.", then I wouldn't be surprised if, after the first thousand failed approaches, the first one able to really learn and self-improve would do something unexpected. The second approach seems more probable, because it's simpler to try.
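For what it's worth, that black-box recipe can be sketched in a few lines of Python. This is a purely illustrative toy (the XOR task, the tiny tanh network, and the "improvement in loss" fitness measure are all made up for the example), not a claim about how such a system would actually be built:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: fit XOR with a tiny two-layer tanh network.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
Y = np.array([[0.], [1.], [1.], [0.]])

def random_net():
    # "initialize it with random weights"
    return {"W1": rng.normal(size=(2, 8)), "W2": rng.normal(size=(8, 1))}

def loss(net):
    hidden = np.tanh(X @ net["W1"])
    return float(np.mean((hidden @ net["W2"] - Y) ** 2))

def teach(net, steps=200, lr=0.05):
    # "teach it": plain gradient descent on squared error.
    # Returns how much the loss dropped, a stand-in for "learns fastest".
    before = loss(net)
    for _ in range(steps):
        hidden = np.tanh(X @ net["W1"])
        out = hidden @ net["W2"]
        d_out = 2.0 * (out - Y) / len(X)
        d_W2 = hidden.T @ d_out
        d_hidden = d_out @ net["W2"].T
        d_W1 = X.T @ (d_hidden * (1.0 - hidden ** 2))
        net["W1"] -= lr * d_W1
        net["W2"] -= lr * d_W2
    return before - loss(net)

def mutate(net, scale=0.3):
    # "make a few randomly modified copies"
    return {k: w + rng.normal(scale=scale, size=w.shape) for k, w in net.items()}

# "... and select the ones that learn fastest."
population = [random_net() for _ in range(8)]
for generation in range(10):
    ranked = sorted(population, key=teach, reverse=True)
    parents = ranked[:2]
    population = parents + [mutate(p) for p in parents for _ in range(3)]
```

Nobody writing a loop like this inspects what the selected networks have actually learned; that is the point of calling it black-box.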
Also after the thousand failed experiments I predict human error in safety procedures, simply because they will feel completely unnecessary. For example, a member of the team will turn off the firewalls and connect to Facebook (for greater irony, it could be LessWrong), providing the new AI a simple escape route.
Also after the thousand failed experiments I predict human error in safety procedures, simply because they will feel completely unnecessary.
We do have some escaped criminals today. It’s not that we don’t know how to confine them securely, it’s more that we are not prepared to pay to do it. They do some damage, but it’s tolerable. What the escaped criminals tend not to do is build huge successful empires—and challenge large corporations or governments.
This isn’t likely to change as the world automates. The exterior civilization is unlikely to face serious challenges from escaped criminals. Instead it is likely to start out—and remain—much stronger than they are.
We don’t have recursively self-improving superhumanly intelligent criminals, yet. Only in comic books. Once we have a recursively self-improving superhuman AI, and it is not human-friendly, and it escapes… then we will have a comic-book situation in real life. Except we won’t have a superhero on our side.
That’s comic-book stuff. Society is self-improving faster than its components. Component self-improvement trajectories tend to be limited by the government breaking them up or fencing them in whenever they grow too powerful.
The “superintelligent criminal” scenario is broadly like worrying about “grey goo”—or about a computer virus taking over the world. It makes much more sense to fear humans with powerful tools that magnify their wills. Indeed, the “superintelligent criminal” scenario may well be a destructive meme—since it distracts people from dealing with that much more realistic possibility.
Component self-improvement trajectories tend to be limited by the government breaking them up or fencing them in whenever they grow too powerful.
Counterexample: any successful revolution. A subset of society became strong enough to overthrow the government, despite the government trying to stop them.
It makes much more sense to fear humans with powerful tools that magnify their wills.
Could a superhuman AI use human allies and give them such tools?
Component self-improvement trajectories tend to be limited by the government breaking them up or fencing them in whenever they grow too powerful.
Counterexample: any successful revolution. A subset of society became strong enough to overthrow the government, despite the government trying to stop them.
Sure, but look at the history of revolutions in large, powerful democracies. Of course, if North Korea develops machine intelligence, a revolution becomes more likely.
It makes much more sense to fear humans with powerful tools that magnify their wills.
Could a superhuman AI use human allies and give them such tools?
That’s pretty much what I meant: machine intelligence as a correctly-functioning tool—rather than as an out-of-control system.
That’s pretty much what I meant: machine intelligence as a correctly-functioning tool—rather than as an out-of-control system.
Seems to me that you simply refuse to see an AI as an agent. If an AI and a human conquer the world, the only possible interpretation is that the human used the AI, never that the AI used the human. Even if it was all the AI’s idea, that just means the human used the AI as an idea generator. Even if the AI kills the human afterwards, that just means the human used the AI incorrectly and thus killed themselves.
Am I right about this?
Er, no—I consider machines to be agents.