Correctness: V correctly predicts future reward in RL scenarios.
The meaning of correctness is very unclear to me on first reading. It only becomes apparent later that “reward” is not intended to refer to the human-designed reward signal at all, because of the later “deceptive misalignment” assumption. So what does “correctness” say? “Correctness” indicates conforming to some standard; but here, the standard being conformed to is only the subjective standard of the misaligned AI itself.
This suggests the interpretation of “correctness” as “reflect the EV of the current state in terms of the misaligned desires”. However, this contradicts beta-coherence, since β_t incorrectly predicts action probabilities.
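(Writing it out, with the caveat that these forms are my guesses at the intended definitions rather than quotes from the post, the two conditions would be roughly
$$V(s) \;=\; \mathbb{E}_{\pi_{\text{actual}}}\Big[\textstyle\sum_{t\ge 0} r(s_t)\,\Big|\, s_0=s\Big] \quad\text{(EV of the misaligned desires)}$$
$$V(s) \;=\; \mathbb{E}_{a\sim\pi_{\beta_t}(\cdot\mid s)}\big[V(s_a)\big] \quad\text{(beta-coherence)},$$
where $s_a$ is the successor of $s$ under action $a$, and $\pi_{\text{actual}}$, $\pi_{\beta_t}$, $s_a$ are placeholder notation. These can only coincide when the $\beta_t$-softmax action distribution matches the actual action distribution.)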
I think it would be better to remove the “correctness” assumption, since it doesn’t really do anything.
I think you’re right: correctness and beta-coherence can be rolled up into one specific property. I think I wrote down correctness as a constraint first, then tried to add coherence, but the specific property is that:
If s is terminal then [...] we just have V(s)=r(s).
Which captures both. I will edit the post to clarify this when I get time.
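Schematically, one way to write such a combined property (the softmax form of the coherence condition here is illustrative, not a quote from the post):
$$V(s)=\begin{cases} r(s) & s\ \text{terminal}\\ \sum_a \pi_{\beta_t}(a\mid s)\,V(s_a) & \text{otherwise,}\end{cases}\qquad \pi_{\beta_t}(a\mid s)\propto \exp\!\big(\beta_t\,V(s_a)\big),$$
where $s_a$ is the successor of $s$ under action $a$.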
If the probability of eventually encountering a terminal state is 1, then beta-coherence alone is inconsistent with deceptive misalignment, right? That’s because we can determine the value of V exactly from the reward function and the oracle, via backwards-induction. (I haven’t revisited RL convergence theorems in a while; I suspect I am not stating this quite right.) I mean, it is still consistent in the case where r is indifferent to the states encountered during training but wants some things in deployment (IE, r is inherently consistent with the provided definition of “deceptively misaligned”). However, it would be inconsistent for an r that is not like that.
In other words: you cannot have inner-alignment problems if the outer objective is perfectly imposed. You can only have inner-alignment problems if there are important cases which your training procedure wasn’t able to check (eg, due to distributional shift, or scarcity of data). Perfect beta-coherence combined with a perfect oracle O rules this out.
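To make the backwards-induction point concrete, here is a minimal sketch in Python. It assumes a finite episodic setting in which every trajectory terminates, and it assumes beta-coherence means a non-terminal state’s value is the softmax-β-weighted average of its successors’ values (the post’s exact condition may differ); all names are illustrative. Under those assumptions, the terminal condition V(s)=r(s) plus coherence pin V down uniquely given r and the transition structure supplied by the oracle.

```python
import numpy as np

def backward_induce_V(states, successors, r, is_terminal, beta):
    """Compute the unique V satisfying the terminal condition and the
    (assumed, softmax-form) beta-coherence condition.

    states      : list of states, topologically ordered so that successors
                  appear later than their predecessors
    successors  : dict mapping each non-terminal state to its successor states
                  (one per action, as supplied by the transition oracle O)
    r           : dict giving the reward of each terminal state
    is_terminal : predicate on states
    beta        : the inverse temperature used during training (beta_t)
    """
    V = {}
    # Walk states in reverse order, so every successor is valued before its parent.
    for s in reversed(states):
        if is_terminal(s):
            V[s] = r[s]                      # terminal condition: V(s) = r(s)
        else:
            succ_vals = np.array([V[s2] for s2 in successors[s]])
            probs = np.exp(beta * succ_vals)
            probs /= probs.sum()             # softmax over successor values
            V[s] = float(probs @ succ_vals)  # beta-coherent expectation
    return V

# Toy usage: one root state with two terminal successors.
V = backward_induce_V(
    states=["root", "A", "B"],
    successors={"root": ["A", "B"]},
    r={"A": 0.0, "B": 1.0},
    is_terminal=lambda s: s in ("A", "B"),
    beta=1.0,
)
# V["root"] comes out to about 0.731: it is fully determined, leaving no slack
# for V to encode anything beyond r and the transition structure.
```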
I’m only referring to the reward constraint being satisfied for scenarios that are in the training distribution, since this maths is entirely applied to a decision taking place in training. Therefore I don’t think distributional shift applies.
Is that rolling up two things into one, or is that just beta-coherence?