There are several aspects of EY’s ruleset I dislike. For instance, his ruleset allows the Gatekeeper to type “k” after every statement the AI writes, without needing to read and consider what the AI argues. I think it’s fair to say that this is against the spirit of the experiment, and I have therefore disallowed it in this ruleset. The EY Ruleset also allows the Gatekeeper to check Facebook, chat on IRC, or otherwise multitask whilst doing the experiment. I’ve found this to break immersion, and it is therefore also banned in the Tuxedage Ruleset.
Eliezer’s rules uphold the spirit of the experiment in that making things easier for the AI goes very much against what we should expect of any sort of gatekeeping procedure.
I think the gatekeeper having to pay attention to the AI is very much in the spirit of the experiment. In the real world, if you built an AI in a box and then ignored it, why build it in the first place?
For the experiment to work at all, the Gatekeeper should read it, yes, but having to think up clever responses or even type full sentences all the time seems to stretch it. “I don’t want to talk about it”, or simply silence, could be allowed as a response, as long as the Gatekeeper actually reads what the AI types.
We shouldn’t gratuitously make things easier for the AI player, but rules functioning to keep both parties in character seem like they can only improve the experiment as a model.
I’m less sure about requiring the gatekeeper to read and consider all the AI player’s statements. Certainly you could make a realism case for it; there’s not much point in keeping an AI around if all you’re going to do is type “lol” at it, except perhaps as an exotic form of sadism. But it seems like it could lead to more rules lawyering than it’s worth, given the people likely to be involved.