A general intelligence that only cares about answering the question given to it does just that, as effectively as it can with the resources available to it. Unless it is completely isolated from all external sources of information, it will proceed directly to creating more of itself as soon as it is given a difficult question. The very best you could hope for, if the question answerer is completely isolated, is an AI Box. If Pat is the gatekeeper, then R.I.P. humanity.
This need not be the case. Whenever we talk about software “wanting” something, we are of course speaking metaphorically. It might be straightforward to build a super-duper Watson or Wolfram Alpha that responds to natural-language queries “intelligently,” without the slightest propensity to self-modify or radically alter the world. You might even imagine such a system having a background thread that pre-computes answers to interesting questions and shares them with humans once per day, again with no ability to self-modify and no significant probability of radically altering human society.
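For concreteness, here is a minimal Python sketch of the kind of architecture being imagined: a fixed query-in/answer-out service plus a once-a-day background precompute thread, with no code path that rewrites the system itself. Everything here is hypothetical illustration, not any real system; `answer_query`, `OracleService`, and `interesting_questions` are made-up names standing in for a fixed inference pipeline.

```python
import threading
import time

def answer_query(query):
    """Hypothetical stand-in for a fixed inference routine (think a
    Watson- or Wolfram-Alpha-style pipeline). It maps a query to an
    answer; nothing in it modifies the running program."""
    return f"(answer to {query!r})"

class OracleService:
    """Question answerer with a once-per-day precompute thread.

    The answering code is fixed at deploy time: the only behaviors are
    query-in/answer-out and a daily digest, with no self-modification
    pathway anywhere in the design."""

    def __init__(self, interesting_questions):
        self.interesting_questions = interesting_questions
        self.daily_digest = []
        thread = threading.Thread(target=self._precompute_loop, daemon=True)
        thread.start()

    def answer(self, query):
        # Foreground path: respond to a natural-language query.
        return answer_query(query)

    def _precompute_loop(self):
        # Background path: once per day, pre-compute answers to a fixed
        # list of interesting questions and queue them for humans.
        while True:
            self.daily_digest = [
                (q, answer_query(q)) for q in self.interesting_questions
            ]
            time.sleep(24 * 60 * 60)  # share at most once per day
```

The point of the sketch is that the “wanting” lives entirely in the humans who chose `interesting_questions`; the program itself is just a loop over a fixed function.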
You have a point, but a powerful question-answering device can be dangerous even if it stays inside the box. You could ask it how to build nanotech. You could ask it how to build an AI that would uphold national security. You could ask it who’s likely to commit a crime tomorrow, and receive an answer that manipulates you into letting the crime happen so the prediction stays correct.
This depends on how powerful the answerer is. If it’s as good as a human expert, it’s probably not dangerous—at least, human experts aren’t. Certainly, I would rather keep such a system out of the hands of criminals or the insane—but it doesn’t seem like that system, alone, would be a serious risk to humanity.
Human experts are dangerous. Ones that are easily copiable and do not have any scruples built in are way more dangerous.
Whoever possesses the answering machine is either friendly or not friendly. The whole system—Oracle+Owner(User)—is then either a rogue or a quite friendly SAI.
The whole problem shifts a little, but it doesn’t change very much for the rest of humanity.