I don’t think “honesty” is what we are looking for.
We have a system which has successfully predicted “what I would say if asked” (for example) and now we want a system that will continue to do that. “What I would say” can be defined precisely in terms of particular physical observations (it’s the number provided as input to a particular program) while conditioning only on pseudorandom facts about the world (e.g. conditioning on my computer’s RNG, which we use to determine what queries get sent to the human). We really just want a system that will continue to make accurate predictions under the “common sense” understanding of reality (rather than e.g. believing we are in a simulation or some other malign skeptical hypothesis).
I don’t think that going through a model of cooperativeness with humans is likely to be the easiest way to specify this. I think one key observation to leverage, when lower-bounding the density, is that the agent is already using the desired concept instrumentally. For example, if it is malevolent, it is still reasoning about what the correct prediction would be in order to increase its influence. In some sense the “honest” agent is just a subset of the malicious reasoning, stopping at the honest goal rather than continuing to backwards chain. If we could pull out this instrumental concept, then it wouldn’t necessarily be the right thing, but at least the failures wouldn’t be malign.
If you have a model with 1 degree of freedom per step of computation, then it seems like the “honest” agent is necessarily simpler, since we can slice out the parts of the computation that are operating on this instrumental goal. It might be useful to try to formalize this argument as a warmup.
(Note that e.g. a fully-connected neural net has this property; so while it’s kind of a silly example, it’s not totally out there.)
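As a toy illustration of the slicing argument, here is a sketch (the construction and all names are mine, purely illustrative): treat a computation as a DAG of steps, with description length proportional to the number of steps, i.e. one degree of freedom per step. A malicious predictor that computes the honest prediction instrumentally contains the honest computation as a sub-DAG, so restricting to the ancestors of the prediction node can only shrink the step count.

```python
def ancestors(dag, node):
    """All steps the given step depends on, including itself."""
    seen = set()
    stack = [node]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(dag[n])
    return seen

# A hypothetical malicious predictor: it computes the honest prediction
# ("predict") as a subroutine, then keeps backwards-chaining toward influence.
malicious = {
    "observe": [],
    "model_world": ["observe"],
    "predict": ["model_world"],              # the instrumental honest prediction
    "plan_influence": ["predict", "model_world"],
    "output": ["plan_influence"],
}

# "Slicing out" the honest agent: keep only the steps the prediction depends on.
honest_steps = ancestors(malicious, "predict")

# The honest agent uses no more degrees of freedom than the malicious one.
assert honest_steps <= set(malicious)
print(len(honest_steps), len(malicious))  # 3 5
```

Of course this only lower-bounds the density under the one-degree-of-freedom-per-step assumption; the hard part is pulling the instrumental concept out of a learned model where the steps are not conveniently labeled.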
Incidentally, this style of argument also seems needed to address the malignity of the universal prior / logical inductor, at least if you want to run a theoretically convincing argument. I expect the same conceptual machinery will be used in both cases (though it may turn out that one is possible and the other is impossible). So I think this question is needed both for my agenda and for MIRI’s agent foundations agenda, and I advocate bumping it up in priority.