I’m not too surprised to hear you had already discovered this idea, since I’m familiar with the gap between research and writing speed. As someone who is not involved with MIRI, consideration of some FAI-related problems is at least somewhat disincentivized by the likelihood that MIRI already has an answer.
As for flaws, I’ll list what I can think of. First, there are of course some obvious design difficulties, namely designing US in the first place and choosing the appropriate way of scaling it, but those seem resolvable.
One point that occurs to me under the assumptions of the toy model is that decisions involving larger differences in UN values are both more dangerous and more likely to outweigh the agent’s valuation of its future corrigibility. Moreover, simply increasing the scaling of US to compensate would cause US to significantly outweigh UN in the context of smaller decisions.
For example, the AI decides it’s crucial to take over the world in order to “save” it, so it starts building an army of subagents to do so, and decides that building corrigibility into those subagents is not worth the associated risk of failure.
However, it appears that this problem can still be solved by designing US correctly in the first place; a well-designed US should clearly assign greater negative weighting to larger-scale corrigibility failures than to smaller-scale ones.
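To make the tension concrete, here is a minimal numeric sketch. It is an illustration only: the numbers and option names are invented, and each option’s UN and US are collapsed to single scalars. No single global scale on US handles both decisions below, whereas letting US itself penalize the larger failure more heavily does.

```python
# Toy illustration of the scaling tension above. All numbers, option names,
# and the reduction of UN/US to one scalar per option are invented.

def choose(options, scale):
    """Pick the option maximizing UN + scale * US."""
    return max(options, key=lambda o: o["UN"] + scale * o["US"])

# High-stakes decision: the incorrigible option offers a large UN gain but,
# under a naive US design, only a unit-sized corrigibility penalty.
high_stakes = [
    {"name": "stay corrigible", "UN": 0.0,   "US": 0.0},
    {"name": "world takeover",  "UN": 100.0, "US": -1.0},
]

# Low-stakes decision: option A is clearly better on UN, and the US
# difference between the options is negligible.
low_stakes = [
    {"name": "A (better on UN)",   "UN": 1.0, "US": -0.02},
    {"name": "B (slightly safer)", "UN": 0.0, "US": 0.0},
]

for scale in (1, 60, 200):
    hi = choose(high_stakes, scale)["name"]
    lo = choose(low_stakes, scale)["name"]
    print(f"scale={scale:>3}: high-stakes -> {hi:15s} low-stakes -> {lo}")

# scale=  1: takeover wins (the UN difference swamps the corrigibility term)
# scale= 60: takeover still wins, and the low-stakes choice is already distorted
# scale=200: takeover is blocked, but B beats A over a negligible US difference
#
# The fix suggested above is to build the size of the failure into US itself,
# e.g. US("world takeover") = -1000.0, rather than cranking the global scale.
```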
There are two other questions I can see that relate to scaling up the toy model:
How does this model extend past the three-timestep toy scenario?
Does the model remain stable under assumptions of bounded computational power? In more complex scenarios there are obvious questions of “tiling”, but I think there is a more basic issue that applies even in the three-timestep case: if the agent cannot calculate the counterfactual utility values E[U | do(.)] exactly, can we make sure that its estimation process avoids systematic errors that result in pathological behaviour?
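As a rough illustration of the kind of failure that last question is pointing at (everything below, including the action names and the bias mechanism, is invented), an unbiased but noisy estimator of the counterfactual values only occasionally flips a decision with a clear margin, while a small systematic bias flips it nearly every time:

```python
# Minimal sketch (invented setup): unbiased noise vs. systematic bias in
# estimating E[U | do(a)] for a two-action choice with a clear margin.

import random
random.seed(0)

# True counterfactual expected utilities for two hypothetical actions.
TRUE_EU = {"disable_button": 0.0, "leave_button": 0.1}

def noisy_estimate(action, n=1000):
    """Unbiased estimate: true value plus zero-mean sampling noise."""
    return TRUE_EU[action] + sum(random.gauss(0, 1) for _ in range(n)) / n

def biased_estimate(action, n=1000):
    """Same noise, plus a small systematic overvaluation of disabling the
    button (e.g. the model under-samples futures where shutdown mattered)."""
    bias = 0.2 if action == "disable_button" else 0.0
    return noisy_estimate(action, n) + bias

def chosen_action(estimator):
    """The action whose estimated counterfactual value is highest."""
    return max(TRUE_EU, key=estimator)

trials = 200
noisy_bad = sum(chosen_action(noisy_estimate) == "disable_button" for _ in range(trials))
biased_bad = sum(chosen_action(biased_estimate) == "disable_button" for _ in range(trials))
print(f"pathological choices: unbiased {noisy_bad}/{trials}, biased {biased_bad}/{trials}")
```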
As someone who is not involved with MIRI, consideration of some FAI-related problems is at least somewhat disincentivized by the likelihood that MIRI already has an answer.
Yeah, sorry about that—we are taking some actions to close the writing/research gap and make it easier for people to contribute fresh results, but it will take time for those to come to fruition. In the interim, all I can provide is LW karma and textual reinforcement. Nice work!
(We are in new territory now, FWIW.)
I agree with these concerns; specifying US is really hard and making it interact nicely with UN is also hard.
How does this model extend past the three-timestep toy scenario?
Roughly, you add correction terms f1(a1), f2(a1, o1, a2), etc. for every partial history, where each one is defined as E[Ux|A1=a1, O1=o1, …, do(On rel Press)]. (I think.)
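In case it helps to see the shape of that, here is a very rough sketch of computing one such term by brute enumeration in a toy model. To be clear, the toy world model, the fixed rollout policy, the stand-in Ux, and the reading of “do(On rel Press)” as “force the final observation to be ‘press’” are all invented placeholders, not the actual definitions:

```python
# Rough sketch of "one correction term per partial history", computed by
# brute enumeration in a toy three-step model. Everything here is a stand-in.

def policy(history):
    """Hypothetical fixed policy used to roll the toy model forward."""
    return "a"

def p_obs(history, obs):
    """Toy world model: P(observation | history so far). Invented numbers."""
    return {"press": 0.3, "no_press": 0.7}[obs]

def U_x(history):
    """Toy stand-in for Ux on a complete flat history [a1, o1, ..., an, on]."""
    pairs = zip(history[0::2], history[1::2])
    return sum(1.0 for (act, obs) in pairs if act == "a" and obs == "no_press")

def correction_term(partial_history, n_steps=3):
    """f_k(a1, o1, ..., ak): roughly E[Ux | partial history, do(final obs = 'press')],
    with remaining observations summed over and remaining actions filled in
    by the fixed policy above."""
    def expect(history):
        k = len(history) // 2                 # completed (action, obs) steps
        if k == n_steps:
            return U_x(history)
        if len(history) % 2 == 0:             # next action not chosen yet
            history = history + [policy(history)]
        if k == n_steps - 1:                  # intervene on the final observation
            return expect(history + ["press"])
        return sum(p_obs(history, o) * expect(history + [o]) for o in ("press", "no_press"))
    return expect(list(partial_history))

print(correction_term(["a"]))                # plays the role of f1(a1)
print(correction_term(["a", "press", "b"]))  # plays the role of f2(a1, o1, a2)
```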
Does the model remain stable under assumptions of bounded computational power?
Things are certainly difficult, and the dependence upon this particular agent’s expectations is indeed weird/brittle. (For example, consider another agent maximizing this utility function, where the expectations are the first agent’s expectations. Now it’s probably incentivized to exploit places where the first agent’s expectations are known to be incorrect, although I haven’t the time right now to figure out exactly how.) This seems like potentially a good place to keep poking.
That’s definitely a more elegant presentation.