Tuxedage and I interpreted this to mean that the AI party couldn’t offer things itself, but could point out real-world consequences beyond its control. Some people on #lesswrong disagreed with the second part.
I interpreted it the same way as #lesswrong. Has anyone tried asking him? He’s pretty forthcoming regarding the rules, since they make the success more impressive.
EDIT: I’m having trouble thinking of an emotional attack that could get an AI out of a box in a short time, especially since both the guard and the AI are assumed personas.