I think that my OP was in hindsight taking for granted that we have to analyze AIs as adversarial. I agree that you could theoretically have safety cases where you never need to reason about AIs as adversarial; I shouldn’t have ignored that possibility, thanks for pointing it out.