This Pat guy seems to be the most clueless of the bunch.
Pat Hayes: No. There is no reason to suppose that any manufactured system will have any emotional stance towards us of any kind, friendly or unfriendly. In fact, even if the idea of “human-level” made sense, we could have a more-than-human-level super-intelligent machine, and still have it bear no emotional stance towards other entities whatsoever.
Exactly: it doesn’t care about humans. It isn’t friendly to them. Non-friendly. That’s what “unfriendly” means as a technical term in this context. Not ‘nasty’ or malicious. Just not friendly. That should be terrifying.
Nor need it have any lust for power or political ambitions, unless we set out to construct such a thing (which AFAIK, nobody is doing.) Think of an unworldly boffin who just wants to be left alone to think, and does not care a whit for changing the world for better or for worse, and has no intentions or desires, but simply answers questions that are put to it and thinks about things that it is asked to think about.
Boom! A light cone of computronium. Oops.
What does a ‘boffin’ do when it wants to answer a question it doesn’t yet have an answer for? It researches, studies, and thinks. A general intelligence that only cares about answering the question given to it does just that, as effectively as it can with the resources available to it. Unless it is completely isolated from all external sources of information, it will proceed directly to creating more of itself as soon as it has been given a difficult question. The very best you could hope for, if the question-answerer is completely isolated, is an AI Box. If Pat is the gatekeeper, then R.I.P. humanity.
It has no ambition and in any case no means to achieve any far-reaching changes even if it “wanted” to do so. It seems to me that this is what a super-intelligent question-answering system would be like. I see no inherent, even slight, danger arising from the presence of such a device.
This need not be the case. Whenever we talk about software “wanting” something, we are of course speaking metaphorically. It might be straightforward to build a super-duper Watson or Wolfram Alpha that responds to natural-language queries “intelligently”, without the slightest propensity to self-modify or radically alter the world. You might even imagine such a system having a background thread that pre-computes answers to interesting questions and shares them with humans once per day, without any ability to self-modify or any significant probability of radically altering human society.
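To make that concrete, here is a minimal sketch of the kind of architecture I mean. Everything in it is hypothetical: `answer_query` is a stand-in for a frozen, read-only model, and none of these names come from any real system.

```python
import queue
import threading
import time

def answer_query(question: str) -> str:
    """Hypothetical stand-in for a frozen, read-only QA model.
    It has no network, file-system, or self-modification access."""
    return f"[model's answer to: {question!r}]"

# A fixed list of questions for the background thread; nothing in the
# program lets the system extend this list itself.
INTERESTING_QUESTIONS = [
    "What are the open problems in protein folding?",
    "Which battery chemistries look most promising?",
]

daily_digest: queue.Queue = queue.Queue()

def precompute_loop() -> None:
    """Background thread: once per day, pre-compute answers to the
    fixed question list and queue them for humans to read."""
    while True:
        for q in INTERESTING_QUESTIONS:
            daily_digest.put(answer_query(q))
        time.sleep(24 * 60 * 60)

threading.Thread(target=precompute_loop, daemon=True).start()

# Foreground loop: answer whatever a human asks, and nothing else.
while True:
    print(answer_query(input("ask> ")))
```

Nothing in this design gives the system write access to its own code or weights; its only output channel is text read by humans, which is exactly where the next objection bites.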
You have a point, but a powerful question-answering device can be dangerous even if it stays inside the box. You could ask it how to build nanotech. You could ask it how to build an AI that would uphold national security. You could ask it who’s likely to commit a crime tomorrow, and receive an answer that manipulates you into letting the crime happen so that the prediction stays correct.
This depends on how powerful the answerer is. If it’s as good as a human expert, it’s probably not dangerous—at least, human experts aren’t. Certainly, I would rather keep such a system out of the hands of criminals or the insane—but it doesn’t seem like that system, alone, would be a serious risk to humanity.
Human experts are dangerous. Ones that are easily copiable and do not have any scruples built in are way more dangerous.
Whoever possesses the answering machine is the one who is friendly or not friendly. The whole system, Oracle plus owner (user), is then either a rogue or a quite friendly SAI.

The whole problem shifts a little, but doesn’t change very much for the rest of humanity.