Can you explain this: “In Section: specificity we suggested penalizing reporters if they are consistent with many different predictors, which effectively allows us to use consistency to compress the predictor given the reporter.” What does it mean to “use consistency to compress the predictor given the reporter” and how does this connect to penalizing reporters if they are consistent with many different predictors?
Warning: this is not a part of the report I’m confident I understand all that well; I’m trying anyway and Paul/Mark can correct me if I messed something up here.
I think the idea here is like:
We assume there’s some actual true correspondence between the AI Bayes net and the human Bayes net (because they’re describing the same underlying reality that has diamonds and chairs and tables in it).
That means that if we have one of the Bayes nets, and the true correspondence, we should be able to use that to rederive the other Bayes net. In particular, the human Bayes net plus the true correspondence should let us reconstruct the AI Bayes net; false correspondences that just do inference from observations in the human Bayes net wouldn’t allow us to do this, since they throw away all the intermediate info derived by the AI Bayes net.
If you assume that the human Bayes net plus the true correspondence are simpler than the AI Bayes net, then this “compresses” the AI Bayes net because you just wrote down a program that’s smaller than the AI Bayes net which “unfolds” into the AI Bayes net.
This is why the counterexample in that section focuses on the case where the AI Bayes net was already so simple to describe that there was nothing left to compress, and the human Bayes net + true correspondence had to be larger.
A different way of phrasing Ajeya’s response, which I think is roughly accurate, is that if you have a reporter that gives consistent answers to questions, you’ve learned a fact about the predictor, namely “the predictor was such that, when it was paired with this reporter, it gave consistent answers to questions.” If there were 8 predictors for which this fact was true, then “it’s the [7th] predictor such that, when paired with this reporter, it gave consistent answers to questions” is enough information to uniquely determine the predictor, i.e. the previous fact + 3 additional bits was enough. If the predictor was 1000 bits, the fact that it was consistent with this reporter “saved” you 997 bits, compressing the predictor into 3 bits.
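To make the bit-counting concrete, here’s a tiny sketch using the same illustrative numbers as above (8 consistent predictors, a 1000-bit predictor — these are made-up figures, not anything from the report):

```python
import math

# Illustrative description lengths (not from the report).
predictor_bits = 1000          # bits to write down the predictor directly
consistent_predictors = 8      # predictors consistent with the given reporter

# Picking one of the 8 consistent predictors takes log2(8) = 3 bits.
index_bits = math.ceil(math.log2(consistent_predictors))

# Bits saved by conditioning on "this predictor is consistent with the reporter".
bits_saved = predictor_bits - index_bits

print(index_bits, bits_saved)  # 3 997
```

The point is just that the fewer predictors a reporter is consistent with, the shorter the index, and so the more the consistency fact compresses the predictor.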
The hope is that maybe the honest reporter “depends” on larger parts of the predictor’s reasoning, so fewer predictors are consistent with it, so the fact that a predictor is consistent with the honest reporter allows you to compress the predictor more. As such, searching for reporters that most compress the predictor would prefer the honest reporter. However, the best way for a reporter to compress a predictor is to simply memorize the entire thing, so if the predictor is simple enough and the gap between the complexity of the human-imitator and the direct translator is large enough, then the human-imitator + memorized predictor is the simplest thing that maximally compresses the predictor.
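A toy version of that counterexample in code (all the bit counts here are invented for illustration — the only thing that matters is the relative sizes):

```python
# Invented bit counts, purely illustrative of the counterexample.
simple_predictor_bits = 50     # the predictor happens to be very simple
direct_translator_bits = 500   # direct translation is complex
human_imitator_bits = 100      # imitating the human is comparatively cheap

# Direct translator: consistent with few predictors, so the residual
# index needed to pin down the predictor is tiny (say 3 bits).
total_direct = direct_translator_bits + 3

# Human imitator that memorizes the predictor: the residual is 0 bits,
# since the memorized copy uniquely determines the predictor, but the
# reporter itself now contains the whole predictor.
total_memorizer = human_imitator_bits + simple_predictor_bits + 0

print(total_memorizer, total_direct)          # 150 503
print(total_memorizer < total_direct)         # True: the memorizer wins
```

With a simple predictor and a big complexity gap between imitator and translator, the memorizing human-imitator achieves maximal compression at a lower total description length, which is exactly the failure mode described above.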