Oh, and as an aside, a practical experiment I accidentally ran back in the day: I played in a series of Diplomacy games where it was common knowledge that if I ever broke my word on anything, all the other players would gang up on me, and I still won or got a 2-way draw (out of 6-7 players) most of the time. If you have a sufficient tactical and strategic advantage (i.e. are sufficiently in-context smarter), then a lie detector won't stop you.
I’m not sure this is evidence for what you’re using it for? Giving up the ability to lie is a disadvantage, but in exchange you gained the ability to be trusted, which is a possibly larger advantage: there are moves which are powerful but leave you open to backstabbing; other alliances can’t make those moves, and yours can.
Agreed, but that was the point I was trying to make. If you take away the AI’s ability to lie, it gains the advantage that you believe what it says, that it is credible. That is especially dangerous when the AI can make credible threats (which potentially include threats to create simulations, but simpler things work too) and also credible promises, if only you’d be so kind as to [whatever helps the AI get what it wants].