In that case, one strategy the EAI might employ is to allow the FAI to increase its utility to an arbitrarily high level before threatening to take it away. In this way, it can threaten an arbitrarily large loss even if the FAI’s utility function is bounded below. Of course, a higher utility might also improve the FAI’s position to resist the EAI’s threats.
In this scenario, it is also possible that the FAI, anticipating the EAI’s future threat against it, might calculate its expected utility differently. For example, if it deduces that the EAI is waiting until some utility threshold to make its threat, it might, if it found the threat credible, cap its own utility growth at some level below that threshold to avoid triggering it.
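To make that tradeoff concrete, here is a minimal sketch of the reasoning. The two-outcome threat model and all the numbers are my own illustrative assumptions, not anything from the original setup; it just shows when capping utility below the assumed threshold beats growing past it and risking the threat:

```python
# Hedged sketch: toy expected-utility comparison for the FAI's choice of
# whether to grow its utility past the (hypothetical) threshold at which
# the EAI is assumed to make its threat. All numbers are illustrative.

def expected_utility(grow_past_threshold: bool,
                     u_cap: float,
                     u_high: float,
                     u_after_threat: float,
                     p_threat_carried_out: float) -> float:
    """Expected utility of the FAI under a crude two-outcome model.

    If the FAI stays below the threshold, it keeps u_cap for sure.
    If it grows to u_high, then with probability p_threat_carried_out the
    EAI's threat is executed and utility drops to u_after_threat.
    """
    if not grow_past_threshold:
        return u_cap
    return (1 - p_threat_carried_out) * u_high + p_threat_carried_out * u_after_threat


# Illustrative numbers (not from the original discussion):
u_cap, u_high, u_after_threat = 10.0, 100.0, 0.0

for p in (0.05, 0.2, 0.5):
    grow = expected_utility(True, u_cap, u_high, u_after_threat, p)
    stay = expected_utility(False, u_cap, u_high, u_after_threat, p)
    print(f"p={p:.2f}: grow={grow:6.1f}  cap={stay:6.1f}  "
          f"prefer={'grow' if grow > stay else 'cap'}")
```

Under these numbers the FAI only prefers to cap its growth when it assigns a fairly high probability to the threat actually being carried out, which is why the credibility of the threat does the real work in the argument above.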
This seems a lot like the human cognitive bias of loss aversion; I wonder if AGIs would (or should) suffer from something similar.
As an observation, it seems like part of the problem in this example is that the agent has access to actions the supervisor does not. The supervisor cannot move to s2 (and therefore cannot provide any information about the reward difference, as noted), but the agent can easily do so. If the agent likewise could not reach s2, it would not matter what it believed about s2.
What happens in scenarios where you restrict the set of actions available to the agent so that it matches those available to the supervisor?
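To sketch what I mean (the two-state setup, action names, and rewards below are hypothetical placeholders, not the actual example): once the agent's action set is restricted to the supervisor's, its belief about the reward in s2 can no longer influence its behaviour.

```python
# Hedged sketch: a tiny two-state example (states, rewards, and action names
# are hypothetical) showing that if the agent's action set is restricted to
# the supervisor's, its belief about the reward in s2 never affects its choice.

def best_action(actions, believed_reward_s2):
    """Pick the action with the highest (believed) immediate reward."""
    # 'go_s2' pays whatever the agent *believes* s2 is worth,
    # 'stay_s1' pays a known reward of 1.
    reward = {"stay_s1": 1.0, "go_s2": believed_reward_s2}
    return max(actions, key=lambda a: reward[a])


supervisor_actions = ["stay_s1"]              # supervisor cannot reach s2
agent_actions = ["stay_s1", "go_s2"]          # agent can reach s2

for belief in (-5.0, 0.0, 5.0):
    unrestricted = best_action(agent_actions, belief)
    restricted = best_action(supervisor_actions, belief)
    print(f"belief about s2 = {belief:+.1f}: "
          f"unrestricted agent -> {unrestricted}, "
          f"restricted agent -> {restricted}")
```

With the full action set, the chosen action flips as the belief changes; with the restricted set it never does, which is the sense in which the agent's belief about s2 would stop mattering.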