Super Google Maps cannot turn my home into a McDonald’s or build a new road by sending me an answer.
Unless it could, e.g., hypnotize me by text message into doing it myself. Let’s assume for a moment that hypnosis via a text-only channel is possible, and that it can be done so that the human notices nothing unusual until it’s too late. If this were true, and Super Google Maps were able to acquire this knowledge and these skills, then the outcome would probably depend on the technical details of the utility function’s definition: does it measure my distance to a McDonald’s that existed at the moment I asked the question, or to a McDonald’s existing at the moment of my arrival? The former could not be gamed by hypnosis; the latter could.
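The distinction could be sketched as two utility functions (a hypothetical illustration; the function and variable names are mine, not anything an actual system defines):

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def utility_snapshot(me, mcdonalds_at_query_time):
    # Distance to the nearest McDonald's that existed when the question
    # was asked: causing a new one to be built later cannot change this.
    return -min(dist(me, m) for m in mcdonalds_at_query_time)

def utility_at_arrival(me, mcdonalds_at_arrival_time):
    # Distance to the nearest McDonald's existing at the moment of
    # arrival: an optimizer could improve this score by causing a new,
    # closer McDonald's to come into existence in the meantime.
    return -min(dist(me, m) for m in mcdonalds_at_arrival_time)

me = (0.0, 0.0)
existing = [(10.0, 0.0)]
print(utility_snapshot(me, existing))                    # -10.0
# Suppose the AI somehow causes a new McDonald's to appear next door:
print(utility_at_arrival(me, existing + [(1.0, 0.0)]))   # -1.0
```

Only the second definition gives the optimizer any incentive to change the world rather than merely report on it.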
Now imagine a more complex task, where people will actually do something based on the AI’s answer. In the example above I also do something—travel to the reported McDonald’s—but that action cannot easily be converted into “build a McDonald’s” or “build a new road”. If the complex task involved building something, however, it would open more opportunities, especially if it involved constructing robots (or nanorobots), that is, possibly autonomous general-purpose builders. Then the correct (utility-maximizing) answer could include instructions to build a robot with a hidden function that the human builders won’t notice.
Generally, a passive AI’s answers are only safe if we don’t act on them in a way that could be predicted by the AI and used to achieve a real-world goal. If Super Google Maps can only make me choose McDonald’s A or McDonald’s B, it is impossible to change the world through this channel. But if I instead ask a Super Paintbrush to paint me an integrated circuit for my robotics homework, that opens a much wider channel.
But that isn’t the correct answer. It is only correct if you assume a specific kind of AGI design that nobody would deliberately create, if such a design is even possible.
The question is how current research is supposed to lead from well-behaved, fine-tuned systems to systems that stop working correctly in a highly complex and unbounded way.
Imagine you went to IBM and told them that improving IBM Watson will at some point make it hypnotize them, or create nanobots and feed them hidden instructions. They would likely ask at what point that is supposed to happen. Is it going to happen once they give IBM Watson the capability to access the Internet? How so? Is it going to happen once they give it the capability to alter its search algorithms? How so? Is it going to happen once they make it protect its servers from hackers by giving it control over a firewall? How so? Is it going to happen once IBM Watson is given control over the local alarm system? How so...? At what point would IBM Watson return dangerous answers? At what point would any drive emerge that causes it to take complex and unbounded actions it was never programmed to take?