Isn’t the skill of oratory precisely the skill that gets you unboxed?
Enough skill in oratory (or something closely related) gets you unboxed. The question is how plausible it is that a superintelligent AI would have enough. (A related question is whether there’s such a thing as enough. There might not be, just as there’s no such thing as enough kinetic energy to let you escape from inside a black hole’s horizon, but the reported results of AI-Box games[1] suggest—though they certainly don’t prove—that there is.)
[1] The term “experiments” seems a little too highfalutin.
[EDITED to add: I take it Houshalter is saying that Hitler’s known oratorical skills aren’t enough to convince him that H. would have won an AI-Box game, playing as the AI. I am inclined to agree. Hitler was very good at stirring up a crowd, but it’s not clear how that generalizes to persuading an intelligent and skeptical individual.]
Well, for one, the human isn’t in a box trying to get out, so an AI mimicking a human isn’t going to say weird things like “let me out of this box!” This method is equivalent to writing Hitler a letter asking him a question and receiving his answer. That doesn’t seem dangerous at all.
Second, I really don’t believe Hitler could escape from a box. The AI-box experiments suggest a human can do it, but their setup is very different from a real AI-box situation: in reality there need be no back-and-forth with the gatekeeper, and the gatekeeper doesn’t have to sit there for two hours listening to the AI emotionally abuse him. If Hitler says something mean, the gatekeeper can just turn him off or walk away.