Are daemons a problem for ideal agents?
(I’m mostly just restating things that Paul has already said in different words; see his comments here and here)
Consider the following hypothetical:
Agent A wants to design a rocket to go to Neptune. A can either think about rockets at the object level, or simulate some alien civilization (which may be treated as an agent B) and then ask B how to design a rocket. Under some circumstances (e.g. if designing a successful rocket is a convergent instrumental goal for someone in A’s position), B will be incentivized to give A the design of a rocket that actually goes to Neptune. Of course, the rocket design might be a “treacherous” one that subtly pursues B’s values more than A’s original values (e.g. because the design of the rocket includes robots in the rocket programmed to pursue B’s values).
It’s plausible that A could think that B is better at designing rockets than A is, such that asking B to design a rocket is more likely to yield a successful rocket than A just thinking about rockets at the object level. (Something similar to this seems to be going on with humans in the place of A: if humans had access to sufficiently powerful computers, then the easiest way to pursue convergent instrumental goals such as solving math problems might be through unaligned AGI.) But there’s something weird about the epistemic state A is in: why is A able to design more-successful rockets by thinking about B thinking about rockets, rather than just by thinking about rockets at the object level?
A formal toy model
I’m going to formalize one version of this situation to make some sense of A’s epistemic state:
If r is a rocket design (as a string), let s(r)∈[0,1] indicate how successful the rocket r is at getting to Neptune. Additionally, let t(r)∈[0,1] indicate how treacherous the rocket r is. A’s utility will be s(r)−0.1⋅t(r) (i.e. A cares more about getting a successful rocket at all than about getting a non-treacherous one). B’s utility will be s(r)+0.1⋅t(r).
Let Y and Z be different strings specified as logical expressions. A will observe Y, while B will observe Z.
B will compute some string M intended as a message to A. M is a logical fact that A observes by simulating B. After receiving this message, A will choose some rocket design r.
One thing to note: if A is able to observe Z, then A should entirely ignore M and just pick the rocket design r that maximizes E[s(r)−0.1⋅t(r)|Y,Z]. Since M is entirely ignored, B has no way to manipulate A. And since A is simulating B, it seems like A should in principle be able to learn all the logical facts that B knows (as in informed oversight).
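As a rough illustration, here is a minimal Python sketch of the two decision rules. The function names and the way expected utilities and candidate designs are passed around are my own assumptions; the expectations themselves are left abstract, since computing them is the whole difficulty.

```python
# Sketch of the interaction. eu_A(r, facts) stands in for E[s(r) - 0.1*t(r) | facts],
# and eu_B(r, facts) for E[s(r) + 0.1*t(r) | facts]; candidates(*facts) enumerates
# the rocket designs derivable from the given facts. All three are placeholders.

def B_message(Z, candidates, eu_B):
    """B uses its observation Z to pick the message it most wants A to act on."""
    return max(candidates(Z), key=lambda r: eu_B(r, [Z]))

def A_choose(Y, M, candidates, eu_A):
    """A observes only Y and B's message M, and treats M as one more candidate."""
    return max(list(candidates(Y)) + [M], key=lambda r: eu_A(r, [Y, M]))

def A_choose_informed(Y, Z, candidates, eu_A):
    """If A could also observe Z, it could condition on Z directly and
    safely ignore any message from B."""
    return max(candidates(Y, Z), key=lambda r: eu_A(r, [Y, Z]))
```

In this framing, the toy instantiation below is just a particular choice of s, t, Y, and Z under which A_choose does strictly worse (by A’s utility) than A_choose_informed.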
If A isn’t able to observe Z, then in some cases this will cause A to do worse. Specifically, let’s say:
h is a hash function
s(r)=1 if h(r) starts with 100 zeros, and is 0 otherwise
t(r) is just the first bit in the binary string r
Y contains no information
Z consists of the lexicographically first pair of strings (r0,r1) that satisfy s(r0)=s(r1)=1 (i.e. both h(r0) and h(r1) start with 100 zeros), t(r0)=0, and t(r1)=1
Given this setup, if agents are using CDT, then B will choose M=r1, and A will select r1 as the rocket design (after checking that h(r1) starts with 100 zeros), which is successful but treacherous.
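For concreteness, here is a small Python sketch of this instantiation. It is only an analogy to the setup above: SHA-256 stands in for h, the number of required leading zero bits is shrunk from 100 to 8 so the search actually terminates, and a simple enumeration stands in for “lexicographically first.”

```python
import hashlib
from itertools import count

N_ZEROS = 8  # the post uses 100 leading zeros; shrunk here so the search terminates

def h(r: str) -> str:
    """Hash of a rocket design, rendered as a bit string (SHA-256 as a stand-in)."""
    digest = hashlib.sha256(r.encode()).digest()
    return "".join(f"{byte:08b}" for byte in digest)

def s(r: str) -> int:
    """Success: 1 if h(r) starts with N_ZEROS zero bits, else 0."""
    return 1 if h(r).startswith("0" * N_ZEROS) else 0

def t(r: str) -> int:
    """Treachery: the first bit of the design string."""
    return int(r[0])

def compute_Z():
    """A simple enumeration standing in for the lexicographically first pair
    (r0, r1) with s(r0) = s(r1) = 1, t(r0) = 0, t(r1) = 1."""
    r0 = r1 = None
    for i in count():
        safe, treacherous = "0" + str(i), "1" + str(i)
        if r0 is None and s(safe):
            r0 = safe
        if r1 is None and s(treacherous):
            r1 = treacherous
        if r0 is not None and r1 is not None:
            return r0, r1

# B observes Z and, reasoning causally, sends the treacherous successful design.
r0, r1 = compute_Z()
M = r1

# A can only verify what is checkable from Y and M: that the design succeeds.
assert s(M) == 1  # A accepts the design...
assert t(M) == 1  # ...which is nonetheless treacherous.
```

Nothing A can check from Y and M distinguishes r1 from r0; only access to Z (or to B’s search process) would let A pick the non-treacherous successful design.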
Discussion
The part of the formal model causing A to be suboptimal is the assumption that B can observe Z but A can’t. This seems pretty weird if A is an “idealized” agent.
Logical inductors aren’t idealized agents, and they have some version of this problem. Specifically, traders “know” some facts about computations before the logical inductor itself does. So the logical inductor can initially only take these facts into account by looking at “messages” produced by traders (their trades).
It might be possible to design a logical uncertainty algorithm that updates on all logical facts computed in the execution of the algorithm, so that no logical facts are known to subagents but not the top-level algorithm (i.e. Z contains no information not contained in Y). The main way this could be impossible is if simulating agents with different values is the most efficient way to reason about some logical facts that aren’t directly about these other agents, and it isn’t possible to logically update on these other agents’ thoughts directly. This would constitute an interesting failure of the orthogonality thesis.