I’m really genuinely curious where the confusion in this argument is coming from, so let’s try this:
1) By the rules, the AI player gets to dictate the results of EVERY test the Gatekeeper performs.
2) From 1, we can derive that the AI is already effectively unboxed, since it can DICTATE the state of reality.
3) Given 2, the AI player has already been released, and all that remains is to make the Gatekeeper accept that this is true.
Dorikka’s objection was that #1 is false, since the Gatekeeper has final veto authority. As near as I can tell, your and Vladimir’s objection is just “nuh-uh!!”, but… you wouldn’t be here if you didn’t have better arguments than that, so I assume this simply reflects my own failure to understand you.
Perhaps you should be saying “trying to type AI DESTROYED is a test of whether you can destroy me, and I can decide its result” rather than “I prove you won’t do it.” I hadn’t seen your point clearly until this comment.
Then I am very glad I made that comment, and thank you for the feedback! :)