I like this analogy, but there are a couple of features that I think make it hard to think about:
1. The human wants to play, not just to win. You stipulated that “the human aims to win, and instructs their AI teammate to prioritise winning above all else”. The dilemma then arises because the aim to win cuts against the human having agency and control. Your takeaway is “Even perfectly aligned systems, genuinely pursuing human goals, might naturally evolve to restrict human agency.”
So in this analogy, “winning” stands for the human’s true goals. But (as you acknowledge) the human doesn’t just want to win; they want both some “winning” and some “agency”. You’ve implicitly tried to factor the entirety of the human’s goals into the outcome of the game, but some of the agency has been left behind, outside that objective, and that is what creates the dilemma.
For an AI system that is truly ‘perfectly aligned’ (truly pursuing the human’s goals), it seems like either
(A) the AI partner would not pursue winning above all else, but would allow some human control at the cost of some ‘winning’, or
(B) if it were possible to actually factor the human’s meta-preference for having agency into ‘winning’, then we shouldn’t care if the AI plays to win above all else, because that already accounts for the human’s desired amount of agency.
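To make the split between (A) and (B) concrete (this decomposition is my own framing, not something from the post): suppose the human’s true utility is something like

$$U_{\text{human}} \;=\; w(\text{game outcome}) \;+\; \lambda \, a(\text{how much the human got to steer}),$$

with some weight $\lambda > 0$ on agency. Under (A), an aligned AI maximises $U_{\text{human}}$ directly and so trades a little $w$ for some $a$; under (B), ‘winning’ is redefined to mean $U_{\text{human}}$ itself, so an AI that plays to win above all else is already giving the human their preferred amount of control.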
For an AI system that is not perfectly aligned, this becomes a different game (in the game-theoretic sense). It’s a three-player game between the AI partner, the human partner, and the opponent, each of whom has a different objective (the difference between the AI and human partners is that the human wants some combination of ‘winning’ and ‘agency’ while the AI wants only ‘winning’; the opponent presumably just wants both of them to lose). One interesting dynamic that could then arise is that the human partner could threaten and punish the AI partner by deliberately playing worse moves than the best ones they can see whenever the AI doesn’t give them enough control. To stop the human from doing this, the AI either has to
(C) negotiate to give the human some control, or
(D) remove all control from the human (e.g. steer play so that the queen has no bad moves available, or no moves at all).
In particular, (D) seems like it would be expensive for the AI partner as it requires playing without the queen (against an opponent with no such restriction), so maybe the AI will let the human play sometimes.
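To illustrate why (D) looks expensive, here is a toy payoff comparison from the AI partner’s point of view. A minimal sketch: all of the numbers below are made-up assumptions (nothing comes from the post); the only point is the ordering they produce.

```python
# Toy payoff table for the AI partner's choices. Every number below is a
# made-up assumption for illustration; nothing is derived from the post.

def ai_win_prob(option: str) -> float:
    base = 0.90  # assumed win probability if the AI chose every move unimpeded
    costs = {
        # (C) negotiate: let the human's die-roll moves reflect the human's own
        # plans; the human cooperates but plays a bit worse than the AI would.
        "negotiate": 0.05,
        # Refuse to cede control: the human eventually rolls a 6 and plays a
        # spiteful move (e.g. hangs the queen) to punish the AI.
        "refuse": 0.35,
        # (D) remove control: engineer positions where the queen has no (bad)
        # moves, i.e. effectively play a queen down against an unrestricted opponent.
        "remove_control": 0.45,
    }
    return base - costs[option]

for option in ("negotiate", "refuse", "remove_control"):
    print(f"{option:>15s} -> win prob ~ {ai_win_prob(option):.2f}")

# With these assumed costs the AI prefers negotiating: the sabotage threat
# makes refusal costly, and neutralising the queen is costlier still.
```

Of course the real ordering depends on how damaging a spiteful queen move is versus playing without the queen, which is exactly what the negotiation would be over.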
2. I don’t think it needs to be a stochastic chess variant. The game is set up so that the human gets to play whenever they roll a 6 on a (presumably six-sided) die. You said this stands in for the idea that, in the real world, the AI system makes decisions on a faster timescale than the human. But this particular way of implementing the speed differential as a game mechanic comes at the cost of making the chess variant stochastic. I think determinism is an important feature of standard chess. In theory, you can solve chess with adversarial look-ahead search, minimax, alpha-beta pruning, etc. But as soon as the die becomes involved, all of the players have to switch to expectiminimax. Rolling a six can suddenly throw off the tempo of a delicate exchange or a whirlwind manoeuvre, and so on.
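For concreteness, here is a minimal sketch of the extra machinery the die forces on every player. The game interface (children, evaluate) is hypothetical and not tied to any chess library; the point is just that chance nodes are averaged by probability, where minimax would only ever take a max or a min.

```python
# Generic expectiminimax: like minimax, but with chance nodes (here, the die
# roll that decides whether the human or the AI makes the team's next move).

def expectiminimax(state, depth, node_type, children, evaluate):
    """node_type is 'max', 'min', or 'chance'.

    children(state, node_type) yields (child_state, child_node_type, prob)
    triples; prob only matters under a chance node (and should sum to 1).
    evaluate(state) scores a position from the maximising team's perspective.
    """
    kids = list(children(state, node_type))
    if depth == 0 or not kids:
        return evaluate(state)
    values = [
        (prob, expectiminimax(child, depth - 1, child_type, children, evaluate))
        for child, child_type, prob in kids
    ]
    if node_type == "max":
        return max(v for _, v in values)
    if node_type == "min":
        return min(v for _, v in values)
    # Chance node: e.g. with a d6, P(human moves next) = 1/6, P(AI moves) = 5/6.
    return sum(prob * v for prob, v in values)
```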
I’m a novice at chess, so it’s not like this is going to make a difference to how I think about the analogy (I will struggle to think strategically in both cases). And maybe a sufficiently accomplished chess player is already familiar with stochastic variants. But for someone in between who is familiar with deterministic chess, it might be easier to consider a non-stochastic variant of the game, for example one where the human gets the option to play once every 6 turns (deterministically), which gives the same speed differential in expectation.
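(For what it’s worth, the ‘same in expectation’ claim is just: assuming one roll per team move, the human moves on any given turn with probability $1/6$, so over $n$ turns they expect $n/6$ moves, which matches one move in every block of 6 turns under the deterministic rule; the deterministic variant only removes the variance.)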