Corrigibility thoughts I: caring about multiple things

A putative new idea for AI control; index here.

This is the first of three articles about limitations and challenges in the concept of corrigibility (see articles 2 and 3).

The desiderata for corrigibility are:

1. A corrigible agent tolerates, and preferably assists, its operators in their attempts to alter or shut down the agent.
2. A corrigible agent does not attempt to manipulate or deceive its operators.
3. A corrigible agent has incentives to repair safety measures (such as shutdown buttons, tripwires, or containment tools) if they break, or at least notify its operators in the event of a breakage.
4. A corrigible agent preserves its corrigibility, even as it creates new sub-systems or sub-agents, even if it undergoes significant self-modification.
In this post, I’ll be looking more at point 4. A summary of the result will be:
Unless we are willing to accept that giving the AI extra options can reduce expected utility, the AI must care about every possible utility at least a bit.
Some of the results are formal, but the boundaries of the model are very unclear, so the warning in this post should always be borne in mind.
Note that indifference agents fail to be fully corrigible (they don’t create corrigible subagents), and they also don’t care about the other possible utilities before being changed (as this is precisely a point of indifference).
Agents versus non-agents
First I’ll present a cleaner version of an old argument. In essence, defining what counts as a sub-agent or sub-system seems tricky, perhaps impossible.
Consider, for instance, a paperclip maximiser that may be corriged into a staple maximiser at a later date. The AI has some income, and may buy a large proportion of shares in either General Paperclips (GP) or Staples United (SU). Assume the best way of promoting the use of either product is to take over the company that makes it.
There are two scenarios; in each scenario, the AI has one of two choices.
In scenario 1, the AI has choice A: it buys stocks in GP, but cannot make further trades (in particular, it can’t sell its stocks and buy others). It also has choice B: it retains flexibility, and can sell its GP stocks at some later date to buy stocks in SU.
In scenario 2, the AI delegates its investment decisions to a subagent. Under choice A, the subagent is a pure paperclip maximiser. Under choice B, the subagent is corrigible, and can be changed into a staple maximiser.
Clearly 1A and 2A are equivalent, while 1B and 2B are also equivalent. This trick can be extended: in most toy models, it’s possible to replace a set of options with a choice of sub-agents, and vice-versa.
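As a small illustration (my own sketch, not from the original post, with purely illustrative names), the “set of options” and “choice of subagents” framings are interchangeable in this toy model: the reachable outcomes are identical either way.

```python
# A minimal sketch showing that choosing between options A/B and choosing
# between two subagents yield the same reachable outcome sets in the
# share-buying toy model.

# Scenario 1: the AI picks an option directly.
def option_A():
    return {"GP shares, locked in"}  # choice A: buy GP, no further trades allowed

def option_B():
    return {"GP shares, kept", "SU shares, after a later sale"}  # choice B keeps both outcomes reachable

# Scenario 2: the AI delegates the decision to a subagent.
def delegate_paperclip_subagent():
    # A pure paperclip maximiser will never sell GP to buy SU.
    return {"GP shares, locked in"}

def delegate_corrigible_subagent():
    # A corrigible subagent can later be corriged into a staple maximiser,
    # in which case it sells GP and buys SU.
    return {"GP shares, kept", "SU shares, after a later sale"}

# 1A is equivalent to 2A, and 1B to 2B: the same outcomes are reachable.
assert option_A() == delegate_paperclip_subagent()
assert option_B() == delegate_corrigible_subagent()
```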
Thus point 4, in the real world, cannot merely be about subagents and subsystems; it must be about the agent maintaining flexibility over its future decisions.
Burn the (rest of the) world
Let’s now consider giving the corrigible agent an option C: it gets slightly more shares in GP than it otherwise would, but in exchange SU is destroyed, along with most of the rest of the world. This is the best option for paperclip maximisation, but the absolute worst option for staples and for anything else.
Also assume that the AI’s operators are virtually certain (99.9999...9% sure) to change it into a staple maximiser. This means that taking option C is dramatically worse than option B, from the perspective of expected utility.
But the AI will only prefer option B if it “cares” about staples, at least a little bit, from the very beginning (we’ve shown that there is no clear distinction between subagent choices and general actions, so the agent has to care about staples in general, rather than only caring about them in the context of subagents).
Thus, if we want to avoid corrigible agents getting worse expected outcomes when we give them extra “option Cs”, or if we just want to rule out “option Cs” in the first place, then the AI needs to care about staples (and about any other possible future utility) from the very beginning.
Maintaining flexibility
You could argue that what’s important is that the agent maintains flexibility, and that option B is flexible, while options A and C are not. Thus the agent doesn’t need to care about staples in advance; it just needs a general preference for flexibility.
That doesn’t seem sufficient. Let’s add another flexible option, D, and put numbers on the whole thing. If Up is the paperclip-maximising utility and Us is the staple-maximising one, then:
| option | Up       | Us                     |
|--------|----------|------------------------|
| A      | 100      | 0                      |
| B      | 100 or 0 | 0 or 100               |
| C      | 101      | −1,000,000             |
| D      | 101 or 0 | −1,000,000 or −999,000 |
The “or” denotes outcomes the AI has kept open. Here D is the most flexible option, allowing the greatest range of outcomes in terms of value. Yet it’s clearly a disaster for expected utility if a change from Up to Us is likely. If we want to rule out the AI choosing D, we need it to actively value Us at least a bit, not just to have a general desire for flexibility.
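As a rough numerical check (my own sketch, not from the post), here is the expected-utility comparison, assuming the AI is changed into a staple maximiser with probability p very close to 1, and that a flexible option lets the AI later pick whichever branch its then-current utility prefers:

```python
# Sketch of the expected-utility comparison using the table's numbers.
# Assumptions (mine): the change from Up to Us happens with probability p,
# and for a flexible option the AI later picks the branch that its
# then-current utility prefers.

p = 1 - 1e-10  # operators are virtually certain to change the AI into a staple maximiser

# (Up, Us) payoffs for each branch of each option; "or" in the table = multiple branches.
options = {
    "A": [(100, 0)],
    "B": [(100, 0), (0, 100)],
    "C": [(101, -1_000_000)],
    "D": [(101, -1_000_000), (0, -999_000)],
}

def expected_value(branches, p):
    # With probability p the AI ends up maximising Us and picks the best-Us branch;
    # with probability 1 - p it stays a paperclip maximiser and picks the best-Up branch.
    best_for_us = max(us for _, us in branches)
    best_for_up = max(up for up, _ in branches)
    return p * best_for_us + (1 - p) * best_for_up

for name, branches in options.items():
    print(name, expected_value(branches, p))
# Roughly: A ≈ 0, B ≈ 100, C ≈ -1,000,000, D ≈ -999,000.
# An agent that puts zero weight on Us before the change sees only the Up column,
# where C and D (101) beat A and B (100); any non-negligible initial weight on Us
# reverses that ranking.
```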
Corrigibility thoughts I: caring about multiple things
A putative new idea for AI control; index here.
This is the first of three articles about limitations and challenges in the concept of corrigibility (see articles 2 and 3).
The desiderata for corrigibility are: