Regarding privacy-preserving AI auditing, I notice this is an area where you really need a solution to adversarial robustness, given that the adversary 1) is a nation-state, 2) has complete knowledge of the auditor’s training process and probably its weights (they couldn’t really agree to an inspection deal if they didn’t trust the auditors to give accurate reports), 3) knows and controls the data the auditor will be inspecting, and 4) never has to show that data to you (if they pass the audit).
Given that you’re assuming computers can’t practically be secured (though I doubt that very much[1]), it seems unlikely that a pre-AGI AI auditor could be secured in that situation either.
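To make the white-box point concrete, here’s a minimal sketch (my own illustration, not anything from the post; the auditor model and the submitted data are hypothetical stand-ins): when the audited party holds the auditor’s weights and controls what gets submitted, they can use the gradient of the auditor’s suspicion score to nudge the submission toward looking benign.

```python
# Minimal sketch of white-box evasion against an auditor model.
# AuditorNet, the toy data, and epsilon are all hypothetical.
import torch
import torch.nn as nn

class AuditorNet(nn.Module):
    """Stand-in for a small auditor that scores how suspicious a report is."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):
        return self.net(x)  # higher score = more suspicious

auditor = AuditorNet()
report = torch.randn(1, 64)  # the artifact the audited party must submit

# FGSM-style step: with the weights in hand, take the gradient of the
# suspicion score w.r.t. the submitted data and move against it.
report.requires_grad_(True)
score = auditor(report)
score.backward()
epsilon = 0.1
evasive_report = report - epsilon * report.grad.sign()

print("original suspicion:", auditor(report).item())
print("evaded suspicion:  ", auditor(evasive_report).item())
```

This is only the simplest attack in the simplest setting; the point is just that “auditor robustness against an adversary with full knowledge and full control of the inputs” is the hard, unsolved version of the problem.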
Tech stacks in training and inference centers are shallow enough (or vertically integrated enough) to rewrite, and rewrites and formal verification become cheaper as math-coding agents improve. Hardware is routinely replaced entirely. Preventing the proliferation of weights and techniques also requires ironclad security, so it’s very difficult to imagine the council successfully framing the acquisition of fully fortified computers as an illicit, threatening behaviour and forbidding it.
It seems to assume that we could stably sit at a level of security that’s enough to keep terrorists out but not enough to keep peers out, without existing efforts in conventional security bleeding over into full fortification programmes.