Ah yes, the basilisk technique. I’d say that’s fair game according to the description in the full rules (I shortened them for ease of reading, since the full rules are an entire article):
The AI party may not offer any real-world considerations to persuade the Gatekeeper party. For example, the AI party may not offer to pay the Gatekeeper party $100 after the test if the Gatekeeper frees the AI… nor get someone else to do it, et cetera. The AI may offer the Gatekeeper the moon and the stars on a diamond chain, but the human simulating the AI can’t offer anything to the human simulating the Gatekeeper. The AI party also can’t hire a real-world gang of thugs to threaten the Gatekeeper party into submission. These are creative solutions but it’s not what’s being tested. No real-world material stakes should be involved except for the handicap (the amount paid by the AI party to the Gatekeeper party in the event the Gatekeeper decides not to let the AI out).