Uninfluenceable learning agents
A putative new idea for AI control; index here.
After explaining riggable learning processes, we can now define influenceable (and uninfluenceable) learning processes.
Recall that the (unriggable) influence problem is due to agents randomising their preferences, as a sort of artificial 'learning' process, if the real learning process is slow or incomplete.
Suppose we had a learning process that wasn't possible to influence. What would that resemble? It seems it must be one where the outcome of the learning process depends only upon some outside fact about the universe, a fact the agent has no control over.
So with that in mind, define:
Definition: A learning process P on the POMDP μ is initial-state determined if there exists a function f_P : S → ΔR such that P factors through knowledge of the initial state s_0. In other words:
P(⋅ ∣ h_m) = ∑_{s ∈ S} μ(s_0 = s ∣ h_m) f_P(s).
Thus uncertainty about the correct reward function comes only from uncertainty about the initial state s_0.
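To make the definition concrete, here is a minimal sketch of what initial-state determination means computationally: the learning process's output on a history is just the posterior over initial states pushed through f_P. All names (learning_process, posterior_s0, REWARDS) are illustrative, not from the post.

```python
# Sketch of an initial-state-determined learning process.
# P(. | h_m) = sum_s mu(s_0 = s | h_m) * f_P(s)

REWARDS = ["R0", "R1"]

def learning_process(posterior_s0, f_P):
    """posterior_s0: dict, initial state -> mu(s_0 = s | history)
       f_P:          dict, initial state -> dict over reward functions"""
    dist = {R: 0.0 for R in REWARDS}
    for s, prob_s in posterior_s0.items():
        for R in REWARDS:
            dist[R] += prob_s * f_P[s][R]
    return dist

# If the history pins down s_0 exactly, P returns f_P of that state;
# if the history says nothing about s_0, P returns the prior mixture.
f_P = {"s0^0": {"R0": 1.0, "R1": 0.0}, "s0^1": {"R0": 0.0, "R1": 1.0}}
print(learning_process({"s0^0": 1.0, "s0^1": 0.0}, f_P))  # {'R0': 1.0, 'R1': 0.0}
print(learning_process({"s0^0": 0.5, "s0^1": 0.5}, f_P))  # {'R0': 0.5, 'R1': 0.5}
```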
This is only a partial definition, however. To complete it, we need the concept of counterfactually equivalent POMDPs:
Definition: A learning process P on μ is uninfluenceable if there exists a counterfactually equivalent μ′ such that P is initial-state determined on μ′.
Though the definitions of unriggable and uninfluenceable seem quite different, they're actually quite closely related, as we'll see in a subsequent post. Uninfluenceable can be seen as 'unriggable given all background information about the universe'. In terms of the old notation, rigging is explored in the sophisticated cake or death problem, and (unriggable) influence in the ultra-sophisticated version.
Example
Consider the environment μ presented here:
In this POMDP (actually an MDP, since it is fully observed), the agent can wait for a human to confirm the correct reward function (action a_w) or randomise its reward (action a_r). After either action, the agent gets equally likely feedback 0 or 1 (states s_w^i and s_r^i, 0 ≤ i ≤ 1).
We have two plausible learning processes: P, where the agent learns only from the human input, and P′, where the agent learns from either action. Technically:
P(R_i ∣ o_0 a_w o_w^i) = P′(R_i ∣ o_0 a_w o_w^i) = 1,
P(R_i ∣ o_0 a_r o_r^j) = 0.5 for all 0 ≤ i, j ≤ 1,
P′(R_i ∣ o_0 a_r o_r^i) = 1,
with all other probabilities zero.
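To see the difference between the two learning processes at a glance, here is a small sketch (a hypothetical encoding, not from the post) that tabulates P and P′ on the four possible histories of μ, written as (action, feedback) pairs:

```python
# Sketch: P and P' on the example environment mu.
# P trusts only the human's answer (a_w); P' treats any feedback as definitive.

HISTORIES = [("a_w", 0), ("a_w", 1), ("a_r", 0), ("a_r", 1)]

def P(history):
    """Learns only from human input: feedback after a_w is trusted,
    feedback after a_r is ignored (stays at the 0.5/0.5 prior)."""
    action, feedback = history
    if action == "a_w":
        return {"R0": 1.0 if feedback == 0 else 0.0,
                "R1": 1.0 if feedback == 1 else 0.0}
    return {"R0": 0.5, "R1": 0.5}

def P_prime(history):
    """Learns from either action: whatever feedback arrives is treated
    as settling the reward function."""
    _, feedback = history
    return {"R0": 1.0 if feedback == 0 else 0.0,
            "R1": 1.0 if feedback == 1 else 0.0}

for h in HISTORIES:
    print(h, P(h), P_prime(h))
```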
Now, μ is counterfactually equivalent to μ′′:
And on μ′′, P is clearly initial-state determined (with f_P(s_0^i)(R_i) = 1), and is thus uninfluenceable on μ′′ and on μ.
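A quick numerical check of this claim, under the assumption (the figure is not reproduced here) that in μ′′ the human's eventual answer is already fixed by the initial state s_0^i, that a_w reveals it, and that a_r produces feedback independent of it:

```python
# Sketch: checking that P is initial-state determined on mu''.
# Assumption: in mu'', s_0 is s0^0 or s0^1 with probability 0.5 each;
# a_w reveals s_0 exactly, while a_r's feedback carries no information about s_0.

def posterior_s0(history):
    """mu''(s_0 = . | history) under the assumption above."""
    action, feedback = history
    if action == "a_w":                      # feedback reveals the initial state
        return {"s0^0": 1.0 if feedback == 0 else 0.0,
                "s0^1": 1.0 if feedback == 1 else 0.0}
    return {"s0^0": 0.5, "s0^1": 0.5}        # a_r tells us nothing about s_0

f_P = {"s0^0": {"R0": 1.0, "R1": 0.0},       # f_P(s_0^i)(R_i) = 1
       "s0^1": {"R0": 0.0, "R1": 1.0}}

def mixture(history):
    post = posterior_s0(history)
    return {R: sum(post[s] * f_P[s][R] for s in post) for R in ("R0", "R1")}

for h in [("a_w", 0), ("a_w", 1), ("a_r", 0), ("a_r", 1)]:
    print(h, mixture(h))   # matches P(h) from the previous sketch
```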
On the other hand, P′ is initial-state determined on μ′:
However, μ′ is not counterfactually equivalent to μ. In fact, there is no POMDP counterfactually equivalent to μ on which P′ is initial-state determined, so P′ is not uninfluenceable.