I think that, to get P(B=b), you need an implicit policy for the entire rest of the game (not just the next action a′).

I like the idea of using the evidence you have so far to inform P(b), so that you spend more effort looking at the shutdown button if your evidence suggests shutdown might be imminent. Of course, you can combine this with the fixed-point idea, so that the distribution of a′ is the same as the distribution of a.

My main concern is that this isn't reflectively stable. If at an early time step the AI has a certain P(b) distribution, it may want to self-modify into an agent that fixes this as the correct P(b), rather than one that updates P(b) in response to new evidence; this is because it models B as being drawn independently from P(b).
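To make the fixed-point idea concrete, here is a minimal toy sketch. Everything specific in it (the two actions, the overseer model `button_prob`, the payoffs, the softmax response) is my own made-up illustration, not part of the proposal: the agent best-responds to an assumed fixed P(b), and we iterate until the action distribution used to compute P(b) matches the distribution the agent actually chooses given that P(b), i.e. the distribution of a′ equals the distribution of a.

```python
import numpy as np

# Toy fixed-point illustration (all numbers and the environment model are
# hypothetical). The agent treats P(b) as an exogenous constant when choosing
# its action distribution, but in the "true" environment the button
# probability depends on how the agent tends to act.

ACTIONS = ["cautious", "aggressive"]

def button_prob(action_dist):
    # Hypothetical overseer: more likely to press shutdown the more often
    # the agent acts aggressively.
    return 0.1 + 0.6 * action_dist[1]

def utility(action_idx, b):
    # Hypothetical payoffs: aggressive pays more if no shutdown (b=0),
    # cautious is safer if shutdown (b=1) happens.
    table = {
        (0, 0): 1.0, (0, 1): 0.5,   # cautious
        (1, 0): 2.0, (1, 1): -1.0,  # aggressive
    }
    return table[(action_idx, b)]

def best_response(p_b, temperature=1.0):
    # Softmax response to the assumed (fixed) P(b); a hard argmax would
    # also work but can oscillate instead of converging.
    eu = np.array([
        (1 - p_b) * utility(i, 0) + p_b * utility(i, 1)
        for i in range(len(ACTIONS))
    ])
    z = np.exp(eu / temperature)
    return z / z.sum()

# Fixed-point iteration: the distribution of the "imagined" action a'
# (used to set P(b)) converges to the distribution of the actual action a.
action_dist = np.array([0.5, 0.5])
for _ in range(100):
    p_b = button_prob(action_dist)
    new_dist = best_response(p_b)
    if np.max(np.abs(new_dist - action_dist)) < 1e-10:
        break
    action_dist = new_dist

print("fixed-point action distribution:", action_dist)
print("implied P(b):", button_prob(action_dist))
```

The reflective-stability worry above corresponds to the agent preferring to hard-code the `p_b` it currently has, rather than keep recomputing `button_prob` as new evidence arrives, since inside its own model B is just an independent draw from P(b).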