1. Proposals should make superintelligences less likely to fight you by relying on some conceptual insight that holds in most cases.
2. With CIRL, this insight is “we want the AI to actively cooperate with humans”, so there’s real value in its being formalized in a paper.
3. In the counterfactual oracle paper, the insight is “what if the AI thinks it’s not on, but still learns” (a toy sketch of this setup follows the list below).
For the last bit, I have two interpretations:
4.a. However, it’s unclear that this design avoids all manipulative behaviour and is completely safe.
4.b. However, it’s unclear that adding the counterfactual feature to another design (e.g. CIRL) would make systems overall safer / would actually reduce manipulation incentives.
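To make point 3 concrete, here is a minimal toy sketch of how I read the counterfactual oracle setup: with some probability the answer is “erased” and never shown to a human, and the oracle is rewarded only in those erased episodes, scored by an automated check. The names here (P_ERASE, automated_score, the toy string-reversal task) are my own illustrative assumptions, not the paper’s notation.

```python
import random

P_ERASE = 0.1  # probability the answer is withheld from humans this episode


def automated_score(question: str, answer: str) -> float:
    """Stand-in for a fully automated grader (no human reads the answer).

    Toy task: the correct answer is the question string reversed.
    """
    return 1.0 if answer == question[::-1] else 0.0


def run_episode(oracle, question: str):
    """Run one episode; return (answer shown to humans or None, reward or None)."""
    answer = oracle(question)
    if random.random() < P_ERASE:
        # Erasure event: nobody sees the answer. This is the only case that
        # produces a training signal, so the oracle learns to answer well
        # "as if" it is not on, i.e. as if no human will act on its output.
        return None, automated_score(question, answer)
    # Otherwise the answer is shown but yields no reward, so there is no
    # training signal pushing the oracle to manipulate whoever reads it.
    return answer, None


if __name__ == "__main__":
    echo_oracle = lambda q: q[::-1]  # trivially correct oracle for the toy task
    shown, reward = run_episode(echo_oracle, "hello")
    print(shown, reward)
```

Whether this construction removes manipulation incentives in practice is exactly what interpretations 4.a and 4.b above are questioning.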
If I understand you correctly, there are actual insights from counterfactual oracles; the problem is that those insights might not apply to a broad class of Alignment failures, but only to “engineered” cases of boxed oracle AIs (as opposed to CIRL, where we might want AIs to be cooperative in general). Is that what you meant?
Does that summarize your comment?
It’s more like 4a. The line of thinking seems useful, but I’m not sure that it lands.