No, it is supposed to be 2⁄3 (roughly).
My thinking was:
- 1⁄3 basically worst-case scheming → 1⁄3 expected badness (as this is just what we are comparing to).
- 1⁄3 importantly non-worst-case scheming → much easier to control (and maybe somewhat less concerning preferences), but AI companies might basically not try at all on control, which implies this is maybe about half as bad as worst-case scheming, so 1⁄6 expected badness from this.
- 1⁄3 the AI is scheming for preferences that aren't that bad → preferences aren't that bad, but still power-seeking, which is somewhat spooky. A range of outcomes seems possible, and AI takeover still seems much worse than human control (IDK, maybe this retains 1⁄3 of the value relative to reasonable human governance and is comparable to, or a bit worse than, authoritarian rule in expectation, though this also makes takeover less likely), so half the badness of the worst case, for 1⁄6.
1⁄3 + 1⁄6 + 1⁄6 = 2⁄3
Note that I’m not claiming that worst-case scheming implies AI takeover; I’m just trying to give an easy way to do a rough conversion if people want to think about these numbers in terms of (basically) worst-case scheming.
1⁄9 badness for the last bullet also seems pretty reasonable and is maybe a better guess (I didn’t try to estimate that precisely), in which case the bottom line would be 1⁄3 + 1⁄6 + 1⁄9 ≈ 61% rather than 67%.
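For concreteness, here is the conversion as a minimal Python sketch; the scenario labels and weights just restate the bullets above, with badness expressed relative to worst-case scheming = 1:

```python
# Expected badness conditional on scheming: sum over scenarios of
# P(scenario | scheming) * badness relative to worst-case scheming.
scenarios = [
    ("basically worst-case scheming",         1 / 3, 1.0),  # the reference point
    ("importantly non-worst-case scheming",   1 / 3, 0.5),  # ~half as bad
    ("scheming for not-that-bad preferences", 1 / 3, 0.5),  # ~half as bad
]

expected = sum(p * badness for _, p, badness in scenarios)
print(f"{expected:.0%}")  # 67%, i.e. roughly 2/3

# Variant: 1/3 badness (rather than 1/2) for the last scenario,
# i.e. a 1/9 contribution overall.
variant = 1 / 3 + 1 / 6 + 1 / 9
print(f"{variant:.0%}")  # 61%
```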
If I thought AI companies would try hard on control, then the badness from easier-to-control AIs would decrease relative to the badness of worst-case scheming, which would further reduce this.
Thanks for answering.
I’d personally put the “AI is scheming for preferences that aren’t that bad / value-aligned preferences” case closer to 1⁄9–1⁄12 at minimum, mostly because I’m more skeptical that human control is automatically way better than AI control, assuming rough value alignment works out and generalizes.