I think the intended parsing of the second rule is “(The AI is understood to be permitted to say anything) with no real world repercussions”, not “The AI is understood to be permitted to say (anything with no real world repercussions)”
i.e., any promises or threats the AI player makes during the game are not binding back in the real world.
Ah, I see. English is wonderful.
In that case, I’ll make it a rule in my games that the AI must also not say anything with real world repercussions.