Values in the middle indicate that I’ll have a conversation and probably not budge, with a chance of being totally convinced. But I am now convinced that the whole idea of boxing is stupid. Why would I honestly have a conversation? Why would I run a transhuman AI that I didn’t want to take over the world? What could I learn from a conversation that I wouldn’t already know from the source code, other than that it doesn’t immediately break? And why would I need to check that the AI doesn’t immediately break, unless I wanted to release it?
Why would I honestly have a conversation? Why would I run a transhuman AI that I didn’t want to take over the world?
Because you’d want to know how to cure cancer, how to best defeat violent religious fundamentalism, etc., etc. If you want to become President, the AI may need to teach you argumentation techniques. And so forth.
Values in the middle indicate that I’ll have a conversation and probably not budge, with a chance of being totally convinced.
Ah, so it’s more like the probability that the intelligence in the box is over the threshold required to convince you of something. That makes sense.
But I am now convinced that the whole idea of boxing is stupid.
Agreed. Everything you said, plus: if you think there’s a chance that your boxed AI might be malicious/unFriendly, talking to it has to be one of the stupidest things you could possibly do...