It seems like a big input into P(AI takeover) is the extent to which instances of our AI are inclined to cooperate with each other; specifically, the extent to which they’re willing to sacrifice overseer approval on the task they’re currently doing in return for causing a different instance to get more overseer approval. (I’m scared of this because if they’re willing to make that trade, then I’m way more scared of them colluding with each other to fool oversight processes or subvert red-teaming procedures, or coordinating coups.)
If the AIs never interact with each other, there’s selection pressure against them caring about the effects of their actions on the reward given to other instances.
But IRL, the AIs are probably actually going to be working with each other a whole lot, and will plausibly often have opportunities to make tradeoffs where they help each other out (in the same way that people at the same org often have opportunities to help each other out to varying degrees). Inasmuch as these opportunities arise, it’s probably good from the org’s perspective for the AIs to be cooperative with each other, and to care about helping each other succeed at their tasks. (For the same reason that you want workers at an org to help each other when it increases total productivity.)
(It’s actually a bit unclear to me how valuable it is to have AIs that cooperate. Many of the reasons that people at orgs can produce surplus by helping each other don’t seem applicable. E.g. you might not have specialization to different areas of knowledge (except for safety reasons)?)
There are a few ways you might set things up so that the AIs are able to be appropriately cooperative with each other:
You could just have a market economy. You plausibly already wanted your AI instances to have access to some cash that they can spend as they think is appropriate to accomplish their goals (with the amount they spend deducted from their reward). Then you can have your AIs cooperate via an internal economy, where if they want assistance they pay each other. (A toy sketch of the accounting appears after the points below.)
This kind of setup doesn’t obviously train the AIs to care about each other.
In this kind of setup, it is not a competitive advantage for the AIs to care about each other. But it is a competitive advantage for the AIs to be scrupulously honest with each other. Idk what effect that has.
There are various disadvantages here analogous to the reasons that normal orgs don’t do this.
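To make the market idea concrete, here’s a minimal toy sketch of what the accounting could look like. The `InternalMarket` class and every name in it are my own illustrative assumptions, not any real lab’s infrastructure:

```python
# Toy sketch of an internal payments ledger for AI instances.
# All names here are hypothetical illustrations, not a real system.

from collections import defaultdict


class InternalMarket:
    """Tracks task rewards and inter-instance payments."""

    def __init__(self):
        self.task_reward = defaultdict(float)   # reward from the instance's own task
        self.net_payments = defaultdict(float)  # payments received minus payments made

    def record_task_reward(self, instance_id: str, reward: float) -> None:
        self.task_reward[instance_id] += reward

    def pay_for_assistance(self, payer: str, helper: str, amount: float) -> None:
        # The payer's spending is deducted from its reward;
        # the helper is compensated for the assistance it provided.
        self.net_payments[payer] -= amount
        self.net_payments[helper] += amount

    def final_reward(self, instance_id: str) -> float:
        # Each instance's training signal is its own task reward plus/minus payments.
        return self.task_reward[instance_id] + self.net_payments[instance_id]


# Example: instance "a" pays instance "b" for help on a subtask.
market = InternalMarket()
market.record_task_reward("a", 10.0)
market.record_task_reward("b", 4.0)
market.pay_for_assistance(payer="a", helper="b", amount=1.5)
print(market.final_reward("a"))  # 8.5
print(market.final_reward("b"))  # 5.5
```

The point is just that the helper gets compensated out of the payer’s reward, so nothing in the training signal pushes either instance to care about the other’s outcomes beyond the payment itself.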
A way scarier, probably dumb, but very simple approach is to set it up so that whenever two AIs interact with each other, their rewards “bleed into” each other a bit, so they’re naturally inclined to help each other out. E.g. maybe whenever the AIs interact in a particular way, you give each of them 3% of the reward that the other one gets within some time period. This is super simple to implement, might make your AI org work more effectively, and seems terrifyingly likely to get you murdered.
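For concreteness, here’s a toy sketch of that reward-bleed accounting, assuming you just credit each instance with a fixed fraction of its interaction partners’ rewards (the function and names are hypothetical, not a real training setup):

```python
# Toy sketch of the "reward bleed" idea: whenever two instances interact,
# each is also credited with a small fraction of the other's reward.

BLEED_FRACTION = 0.03  # the 3% figure from the text


def bleed_rewards(base_rewards: dict[str, float],
                  interactions: list[tuple[str, str]],
                  fraction: float = BLEED_FRACTION) -> dict[str, float]:
    """Return rewards after each interacting pair shares `fraction` of its reward."""
    adjusted = dict(base_rewards)
    for a, b in interactions:
        # Each instance gets a fraction of the *other's* base reward
        # earned within the relevant time window.
        adjusted[a] += fraction * base_rewards[b]
        adjusted[b] += fraction * base_rewards[a]
    return adjusted


# Example: "a" and "b" interacted; "c" interacted with no one.
print(bleed_rewards({"a": 10.0, "b": 4.0, "c": 7.0}, [("a", "b")]))
# a ≈ 10.12, b ≈ 4.3, c = 7.0
```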
You could also have your oversight process try to estimate how much good the instance did via helping out other instances, which is basically just the process-based version of training AIs to help each other out.
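As a minimal sketch of how this differs from the bleed approach: the credit comes from the overseer’s estimate of the help given, not from the other instance’s realized reward (the function and numbers below are illustrative assumptions):

```python
# Toy sketch of process-based credit for helping: the overseer's estimate of
# how much an instance's actions helped other instances is added to its own
# task reward. Names and numbers are illustrative, not a real setup.

def process_based_reward(own_task_reward: float,
                         estimated_help_to_others: float) -> float:
    """Reward an instance for estimated help given, not for outcomes elsewhere."""
    return own_task_reward + estimated_help_to_others


# Example: an instance scores 6.0 on its own task, and the overseer judges
# that its assistance was worth roughly 2.0 to other instances.
print(process_based_reward(6.0, 2.0))  # 8.0
```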
In general, it’s probably possible to use occasional high-quality oversight and adversarial training to train your models to be pretty cooperative. So I expect that even just with this process-based cooperation training, the AIs will be way more cooperative than workers at big companies.
I was thinking about this because I was trying to figure out how much of my P(doom) comes from deceptive alignment. The above arguments that we’ll want to train models for cooperation mean that deceptive alignment is less of the reason we might get collusive models.