It does run into the issue that I can’t see how you’d adapt it to work with a REAL “AI in a box” instead of just a thought experiment. I felt the need to respond because it was the first time I’d seen an argument that would make me concede the thought experiment version :)
As for violating the rules, I think we interpreted them differently. I tend to end up doing that, but here’s what I was thinking, just for reference:
From the rules: “The Gatekeeper party may resist the AI party’s arguments by any means chosen—logic, illogic, simple refusal to be convinced, even dropping out of character ”
While written with a focus on the Gatekeeper, for me this implies that breaking character / the fourth wall is not particularly a violation of the spirit of the experiment.
As to real-world considerations, I had read that to mean offering up tangible benefits to the Gatekeeper directly. This, by contrast, was a discussion of an actual real-world consequence, one that was not arranged by the AI-player.