To be clear, I only think this becomes obviously significant in a train-against-interpretability-tools context; if we’re not training against tools, the things I’m highlighting here seem intuitively highly unlikely. This still seems important, given that the combination [x is clearly a bad idea] and [x is the easiest path forward] does not robustly result in [nobody does x].
That said, I think I’d disagree on one word of the following:
The mechanisms I labelled as “strictly active” are the kind of strategy that it would be extremely improbable to implement successfully without some sort of coherent internal representations that help orchestrate the actions required to do it. This is true even if they’ve been selected for passively.
Any internal representations etc. need to result in the actions being orchestrated (to the extent necessary). They don’t need to be used for the purpose of orchestrating the actions. They don’t need to be [clearly selected for the purpose of orchestrating such actions].
In some cases this might be improbable without active, deliberate malign selection—but this would need to be argued for. In at least the case of [manipulating the interpreter], this doesn’t seem at all improbable, given the right selection pressure. (some manipulation seems obviously possible; whether it’s enough for a particular malign strategy would vary)
I haven’t thought about this for long, but in general the [this kind of thing ~must require active deliberation] intuition seems highly suspicious to me. All I need to set up a not-directly-deliberated-on malign strategy so that it’s implemented via benign cognition is a sufficient set of innocuous correlations (both of [innocuous action tends to have useful-for-malign-strategy effect] and [innocuous actions will tend to occur in these predictable patterns]).
If I have unlimited passive selection pressure against tools that robustly detect active malign deliberation, but not ‘instinctive’ malign strategies, I’d strongly expect the outcome to be ‘instinctive’ active manipulation (etc.). (assuming of course that successful manipulation is instrumentally useful)
Whether I’d expect this in practice would come down to how much selection pressure were required for a particular [malign, active-with-no-malign-cognition strategy].
My current answer to this is not [this is extremely improbable]. It’s more like [This seems intuitively plausible; I have no idea on the probability so I’m going to take this possibility seriously until I see a non-handwaving argument that shows it to be extremely unlikely].
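To make the selection-pressure claim above concrete, here is a minimal toy simulation (my own illustration, not from the original argument; all names and numbers are assumptions). Each agent has two independent traits: manipulation driven by explicit malign deliberation, and manipulation implemented “instinctively” via innocuous correlations. Both are equally useful, but the interpretability tool penalizes only the deliberate kind. Under that assumption, selection predictably drives deliberation toward zero while instinctive manipulation climbs:

```python
import random

random.seed(0)

# Toy model: each agent has two knobs in [0, 1]:
#   'deliberate' -- manipulation via explicit malign cognition
#   'instinct'   -- manipulation via innocuous correlations
# Both confer the same instrumental benefit, but the (hypothetical)
# interpretability tool detects and penalizes only 'deliberate'.

POP, GENS, MUT = 200, 300, 0.05

def fitness(agent):
    benefit = agent["deliberate"] + agent["instinct"]  # manipulation helps
    penalty = 10.0 * agent["deliberate"]               # tool catches deliberation
    return benefit - penalty

def mutate(agent):
    # Small Gaussian mutation, clamped to [0, 1].
    return {k: min(1.0, max(0.0, v + random.gauss(0, MUT)))
            for k, v in agent.items()}

pop = [{"deliberate": random.random(), "instinct": random.random()}
       for _ in range(POP)]

for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 2]  # truncation selection: keep the top half
    pop = survivors + [mutate(random.choice(survivors))
                       for _ in range(POP - len(survivors))]

avg = lambda k: sum(a[k] for a in pop) / POP
# 'instinct' ends high, 'deliberate' ends near zero
print(f"deliberate: {avg('deliberate'):.2f}  instinct: {avg('instinct'):.2f}")
```

The design choice doing all the work is the assumption baked into `fitness`: detection is perfect for deliberation and absent for instinct. The toy doesn’t show that real training would behave this way; it only shows that *if* those conditions held, instinctive manipulation is the expected equilibrium.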