For variant 1, do you mean you’d give only the dishonest advisors access to an engine, while the honest advisor has to do without? I’d expect that’s an easy win for the dishonest advisors, for the same reason it would be an easy win if the dishonest advisors were simply much better at chess than the honest advisor.
Contrariwise, if you give all advisors access to a chess engine, that seems to me like it might significantly favor the honest advisor, for a couple of reasons:
A. Off-the-shelf engines are going to be more useful for generating honest advice; that is, I expect the honest advisor will be able to leverage them more easily.
The honest advisor can just ask for a good move and directly use it; dishonest advisors can’t directly ask for good-looking-but-actually-bad moves, and so need to do at least some of the search themselves.
The honest advisor can consult the engine to find counter-moves that show why dishonest recommendations are bad; dishonest advisors have no obvious way at all to leverage the engine for generating fake problems with honest recommendations. (Both honest uses are sketched in the code after this list.)
(It might be possible to modify a chess engine, or create a custom interface in front of it, that would make it more useful for dishonest advisors; but this sounds nontrivial.)
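To make the asymmetry concrete, here's a minimal sketch of the honest advisor's two engine uses, written with the python-chess library driving a local Stockfish binary. The engine path, search limits, and helper name are my own illustrative assumptions, not part of anyone's proposal:

```python
# Sketch only: assumes python-chess is installed and a Stockfish
# binary is available on PATH. Names and limits are illustrative.
import chess
import chess.engine

ENGINE_PATH = "stockfish"  # assumption: engine location

def honest_workflow(board: chess.Board, suspect_move: chess.Move):
    with chess.engine.SimpleEngine.popen_uci(ENGINE_PATH) as engine:
        # Use 1: ask for a good move and recommend it directly.
        best = engine.play(board, chess.engine.Limit(time=1.0)).move

        # Use 2: probe a rival's recommendation for a refutation.
        probe = board.copy()
        probe.push(suspect_move)
        info = engine.analyse(probe, chess.engine.Limit(depth=18))
        # Evaluation from the recommender's side after their move: a
        # strongly negative number means the move is bad, and the
        # principal variation is a concrete refutation line to show
        # the player.
        eval_cp = info["score"].pov(board.turn).score(mate_score=100000)
        refutation = info.get("pv", [])
    return best, eval_cp, refutation
```

There's no analogous one-liner for the dishonest side: "find a plausible-looking move that actually loses" isn't a query a stock UCI engine answers, so the dishonest advisors are back to doing part of the search themselves.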
B. A lesson I’ve learned from social deduction board games is that the pro-truth side generally benefits from communicating more details. Fabricating details is generally more expensive than honestly reporting them, and also creates more opportunities to be caught in a contradiction.
Engine assistance seems like it will let you ramp up the level of detail in your advice:
You can give quantitative scores for different possible moves, adding at least a few bits of entropy per recommendation (see the sketch after this list).
You can analyze (and therefore discuss) a larger number of options in the same amount of time (though perhaps you can shorten time controls to compensate).
Note that the player can ask advisors for more details than they have time to cross-check, and advisors won't know which details the player will actually check. This creates an asymmetric burden: every fabricated detail has to survive a check that might never come, while honest details cost nothing extra to produce.
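One concrete way to generate that extra detail: UCI engines expose a MultiPV mode that scores several candidate lines at once. A minimal sketch, again assuming python-chess and a Stockfish binary on PATH (function name and limits are illustrative):

```python
# Sketch: report centipawn scores for the top-n candidate moves,
# giving the player several quantitative details to spot-check.
import chess
import chess.engine

def scored_candidates(board: chess.Board, n: int = 5):
    with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
        # multipv=n makes analyse() return one info dict per line.
        infos = engine.analyse(board, chess.engine.Limit(depth=18),
                               multipv=n)
    return [
        (line["pv"][0].uci(),  # candidate move
         line["score"].pov(board.turn).score(mate_score=100000))
        for line in infos
    ]
```

An honest advisor can report this table verbatim; a dishonest advisor has to decide which numbers to doctor, and every doctored number is a potential contradiction if the player happens to spot-check it.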
What if each advisor was granted a limited number of uses of a chess engine… like 3 each per game? That could help the betrayers come up with a good betrayal when they thought the time was right. But the good advisor wouldn't know which moves the bad ones were choosing to use the chess engine on.