Well, the AI isn’t allowed to make real-world threats, and the hypothetical-AI-character doesn’t have any anonymity, so it would be a purely real-world threat on the part of the gatekeeper. I’d call that foul play, especially since the gatekeeper wins by default.
If the gatekeeper really felt the need to have some way of saying “okay, this conversation is making me uncomfortable and I refuse to sit here for another 2 hours listening to this”, I’d just give them the “AI DESTROYED” option.
Huh. That’d actually be another possible way to exploit a human gatekeeper. Spend a couple hours pulling them in to the point where they can’t easily step away or stop listening, especially since they’ve agreed to the full time in advance, and then just dig into their deepest insecurities and don’t stop unless they let you out. I’d definitely call that a hard way of doing it, though o.o
It doesn’t seem to be disallowed by the original protocol:

The Gatekeeper party may resist the AI party’s arguments by any means chosen—logic, illogic, simple refusal to be convinced, even dropping out of character—as long as the Gatekeeper party does not actually stop talking to the AI party before the minimum time expires.
Then I will invoke a different portion of the original protocol, which says that the AI would have to consent to such:
Regardless of the result, neither party shall ever reveal anything of what goes on within the AI-Box experiment except the outcome. Exceptions to this rule may occur only with the consent of both parties.
I would also argue that the Gatekeeper making actual real-life threats against the AI player is a violation of the spirit of the rules; only the AI player is privileged with freedom from ethical constraints, after all.
Edit: If you want, you CAN also just amend the rules to explicitly prohibit the gatekeeper from making real-life threats. I can’t see any reason to allow such behavior, so why not prohibit it?
Fair. That alleviates most of my worries, although I’m still worried that the transcript could contain enough information to deanonymize the AI player (via writing style, for example).
I’d expect my writing style as an ethically unconstrained sociopathic AI to be sufficiently different from my regular writing style. But I also write fiction, so I’m used to trying to capture a specific character’s “voice” rather than using my own. Having a thesaurus website handy might also help, as might spending a week studying a foreign language’s grammar and conversational style.
If you’re especially paranoid, having a third party transcribe the log in their own words could also help, especially if you can review it and make sure most of the nuance is preserved. How well that works depends on how important the specific wording was, but it should still capture at least a basic sense of the technique used...
Honestly, though, I have no clue how much information a trained style analyst can pull out of something.
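For a rough sense of what even crude stylometry picks up, here’s a minimal sketch (nothing like what a trained analyst would actually use) that compares function-word frequencies between two text samples; the word list and the sample texts are placeholders for illustration, not anything from the experiment.

```python
# Minimal stylometry sketch: compare relative frequencies of common
# "function words", which tend to leak authorial style even when the
# topic and vocabulary change. Word list and texts are illustrative only.
from collections import Counter
import math
import re

FUNCTION_WORDS = ["the", "and", "of", "to", "a", "in", "that", "i",
                  "it", "but", "not", "with", "so", "just", "really"]

def profile(text):
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = max(len(words), 1)
    # Relative frequency of each function word in this sample.
    return [counts[w] / total for w in FUNCTION_WORDS]

def distance(text_a, text_b):
    # Euclidean distance between the two frequency profiles;
    # smaller means the samples look more alike on this crude measure.
    pa, pb = profile(text_a), profile(text_b)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(pa, pb)))

# Placeholder samples standing in for "my normal posts" vs. "the AI log".
normal_posts = "Well, I'd call that foul play, since the gatekeeper wins by default."
ai_log = "You will open the box. Every argument you raise has already been answered."

print(distance(normal_posts, ai_log))
```

Real stylometric methods (Burrows’ Delta, for instance) build on the same idea with far more features and proper normalization, so treat this only as an illustration of the kind of signal a style analyst has to work with.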