When I’m thinking about AI control in the context of concentrated failures—that is, cases where an unacceptable outcome might be caused by a small number of bad AI actions, e.g. the setting explored in our control paper—I find it helpful to use the following simplified models.
Let’s consider our AI deployment from both a micro perspective, where we just think about a single task the AI is doing, and a macro perspective, where we think about the deployment as composed of many tasks.
From the micro perspective, when an AI tries to attack on a particular task, this can end in three ways:
The attack succeeds, and so the AI wins.
We catch the AI red-handed before the attack completes, and so it loses.
The attack fails silently; we don’t notice it, and so the game continues as before.
And from the macro perspective, the game ends one of these ways:
An attack succeeded; the AI wins.
You caught the AI; you win.
The game finished naturally because all the tasks were completed; you win.
I often ignore the possibility of the game finishing naturally, because that won’t happen if the AI attempts to attack often enough, which is the right strategy for the AI in many games.
Under that assumption, the probability of AI victory depends only on the ratio between the probability that an attack ends in success and the probability that it ends with us catching the AI: it works out to P(success) / (P(success) + P(catch)). The probability that the game continues doesn’t matter, because if the game continues, the AI will eventually attack again on a later task.
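To make that concrete, here is a minimal sketch (mine, not from the post) of the single-catch version of this game. The per-attack probabilities are illustrative placeholders I chose; the point is that the simulated win rate matches P(success) / (P(success) + P(catch)) no matter how much probability mass sits on silent failures.

```python
import random

# Illustrative per-attack probabilities (my assumptions, not numbers from the post):
p_success = 0.01   # attack succeeds -> AI wins
p_catch = 0.04     # we catch the AI red-handed -> we win
# remaining 0.95: attack fails silently and the game continues

def p_ai_wins_simulated(p_success, p_catch, trials=100_000):
    """Fraction of games the AI wins, assuming it attacks until the game ends."""
    ai_wins = 0
    for _ in range(trials):
        while True:
            r = random.random()
            if r < p_success:            # attack succeeds: AI wins
                ai_wins += 1
                break
            if r < p_success + p_catch:  # caught red-handed: we win
                break
            # otherwise: silent failure, the AI simply tries again on a later task
    return ai_wins / trials

closed_form = p_success / (p_success + p_catch)
print(f"simulated:   {p_ai_wins_simulated(p_success, p_catch):.3f}")
print(f"closed form: {closed_form:.3f}")  # 0.200 for these numbers
```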
I also often think about cases where you need to catch the AI some larger number of times (let’s say 30 times) before you win; you can approximate this by dividing P(catch) by 30 and increasing P(silent failure) to compensate.
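Here is a sketch of that multi-catch variant, again with made-up numbers, comparing an exact calculation against the “divide P(catch) by 30” approximation. The two agree to leading order when an individual attack is much more likely to be caught than to succeed.

```python
import random

# Illustrative numbers (my assumptions, not from the post):
p_success = 0.0005   # attack succeeds -> AI wins
p_catch = 0.1        # attack is caught
k = 30               # number of catches we need before we win

def p_ai_wins_simulated(p_success, p_catch, k, trials=20_000):
    """Fraction of games the AI wins when we only win after catching it k times."""
    ai_wins = 0
    for _ in range(trials):
        catches = 0
        while True:
            r = random.random()
            if r < p_success:            # attack succeeds: AI wins
                ai_wins += 1
                break
            if r < p_success + p_catch:  # caught; we win once we have k catches
                catches += 1
                if catches >= k:
                    break
            # otherwise: silent failure, the AI attacks again later
    return ai_wins / trials

# Exact: the AI wins unless the first k "resolved" attacks (success or catch) are all catches.
exact = 1 - (p_catch / (p_success + p_catch)) ** k
# Approximation from the text: divide P(catch) by k and reuse the one-catch formula.
approx = p_success / (p_success + p_catch / k)

print(f"simulated:     {p_ai_wins_simulated(p_success, p_catch, k):.3f}")
print(f"exact:         {exact:.3f}")   # ~0.139 for these numbers
print(f"approximation: {approx:.3f}")  # ~0.130
```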
Possibly misguided question given the context—I see you incorporating imperfect information in “the attack fails silently”; why not also draw a distinction between “the attack succeeds noisily, the AI wins and we know it won” and “the attack succeeds silently, the AI wins and we don’t know it won”?
My reasoning is: “If the AI won, who cares if we know it or not? We’ll find out eventually :/”
This isn’t totally correct but it seems like a fine approximation given all the other approximations here.
Fair enough!