You have a point, but a powerful question-answering device can be dangerous even if it stays inside the box. You could ask it how to build nanotech. You could ask it how to build an AI that would uphold national security. You could ask it who’s likely to commit a crime tomorrow, and receive an answer that manipulates you to let the crime happen so the prediction stays correct.
This depends on how powerful the answerer is. If it’s as good as a human expert, it’s probably not dangerous—at least, human experts aren’t. Certainly, I would rather keep such a system out of the hands of criminals or the insane—but it doesn’t seem like that system, alone, would be a serious risk to humanity.
Human experts are dangerous. Ones that can be copied easily and have no scruples built in are far more dangerous.