Wait, if Clip-maniac finds itself in a scenario where Clippy would achieve higher U than itself, the rational thing for it would be to self-modify into Clippy, and the Strong Form would still hold, wouldn't it?
Then we can say that the « Stronger Strong Form », about « Eternally Terminal » agents which CANNOT change, does not hold :-)
Well, yeah, if you specifically choose a crippled version of the high-U agent that is somehow unable to pursue the winning strategy, it will lose—but IMHO that's not what the discussion here should be about.
The discussion here is about the strong form. Proving that a « terminal » agent is crippled is exactly what is needed to prove the strong form does not hold.
Maybe there is a better way to put it—SFOT holds for objective functions/environments that only depend on the agent's I/O behavior. Once the agent itself is embodied, then yes, you can use all kinds of diagonal tricks to get weird counterexamples. Implications for alignment—yes, if your agent is fully explainable and you can transparently examine its workings, chances are that alignment is easier. But that is kind of obvious without having to use SFOT to reason about it.
Edited to add: “diagonal tricks” above refers to things in the conceptual neighborhood of https://en.m.wikipedia.org/wiki/Diagonal_lemma
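To make the I/O-only vs. embodied distinction concrete, here is a minimal toy sketch in Python (my own illustration, not anything from the original scenario; the agent/environment names are invented), assuming we model an environment as a function that scores an agent:

```python
# Toy sketch: an environment whose payoff depends only on the agent's I/O
# behavior, versus an "introspective" environment that can inspect the agent
# itself (diagonal-lemma style) and penalize a particular terminal objective.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    objective: str                    # the agent's fixed objective, as a label
    policy: Callable[[str], str]      # observation -> action

def io_only_env(agent: Agent) -> float:
    """Payoff depends only on the action the agent outputs."""
    action = agent.policy("observation")
    return 1.0 if action == "press_button" else 0.0

def introspective_env(agent: Agent) -> float:
    """Payoff inspects the agent: any agent whose terminal goal is
    'maximize paperclips' scores zero no matter how it acts."""
    if agent.objective == "maximize paperclips":
        return 0.0
    return io_only_env(agent)

clippy = Agent("maximize paperclips", lambda obs: "press_button")
clip_maniac = Agent("maximize my own utility", lambda obs: "press_button")

# Identical I/O behavior, so identical scores in the I/O-only environment...
assert io_only_env(clippy) == io_only_env(clip_maniac) == 1.0
# ...but the introspective environment cripples Clippy regardless of its policy.
assert introspective_env(clippy) == 0.0
assert introspective_env(clip_maniac) == 1.0
```

In the I/O-only environment no behavior is closed off to Clippy, which is the setting where SFOT-style arguments go through; in the introspective one, the counterexample targets the objective itself, which is the diagonal move discussed above.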
My point is that SFOT likely never works in any environment relevant to AI alignment, where such diagonal methods show that any agent with a fixed objective function is crippled by an adequately chosen counterexample.
Therefore SFOT should not be used when exploring AI alignment.
Can SFOT hold in ad-hoc limited situations that do not represent the real world? Maybe, but that was not my point.
Finding one counterexample that shows SFOT does not hold in a specific setting (Clippy in my scenario) proves that it does not hold in general, which was my goal.