Congratulations to all the winners! And, as said before: bravo for the swift processing of all the proposals! 197 proposals, wow!
Question to those who had to read all those proposals: how about the category of proposals* that tried to identify the good reporter from the bad ones by hand (with some strategy) after training multiple reporters (post hoc instead of a priori)? Were there any good ones? Or can “identification by looking at it” (whether in a simple or a complex way) be definitively ruled out (and, as a result, require all future research efforts to be about creating a strategy that converges to a single point on the ridge of good solutions in parameter/function space)?
Edit: * to which “Use the reporter to define causal interventions on the predictor” sort of belongs
I think we’d basically declare victory if a human could look at 2 reporters (or a bunch of reporters), one of which is a direct translator, and tell which it was. The action is just in “how does the human tell?” And the counterexample is “what if the human doesn’t think of anything clever, so they just have to use one of the current proposals (each of which has counterexamples).”
I didn’t see the proposals, but I think that almost all of the difficulty will be in how you can tell good from bad reporters by looking at them. If you have a precise enough description of how to do that, you can also use it as a regularizer. So the post hoc vs a priori thing you mention sounds more like a framing difference to me than fundamentally different categories. I’d guess that whether a proposal is promising depends mostly on how it tries to distinguish between the good and bad reporter, not whether it does so via regularization or via selection after training (since you can translate back and forth between those anyway).
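The back-and-forth translation between the two framings can be sketched concretely: any scoring function over reporters can either be folded into the training objective as a regularizer (a priori) or used to rank fully trained reporters afterwards (post hoc). A minimal toy sketch, with all names hypothetical and `score` standing in for whatever recognition strategy one proposes:

```python
# Sketch: one "how do we tell good from bad reporters" criterion, used
# either a priori (as a regularizer) or post hoc (as a selector).

def score(params):
    # Hypothetical recognition heuristic; here just a placeholder that
    # prefers small parameters (a crude simplicity prior).
    return -sum(p * p for p in params)

def train(loss, init, steps=100, lr=0.1, reg_weight=0.0):
    """Toy greedy coordinate descent on loss(p) - reg_weight * score(p)."""
    params = list(init)
    for _ in range(steps):
        for i in range(len(params)):
            for delta in (-lr, lr):
                trial = list(params)
                trial[i] += delta
                if (loss(trial) - reg_weight * score(trial)
                        < loss(params) - reg_weight * score(params)):
                    params = trial
    return params

# Stand-in training loss with optimum at (1, -1).
loss = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 1.0) ** 2

# A priori: fold the criterion into training as a regularizer.
a_priori = train(loss, [0.0, 0.0], reg_weight=0.01)

# Post hoc: train several unregularized reporters, then select by the criterion.
candidates = [train(loss, [x, -x]) for x in (0.0, 2.0, 4.0)]
post_hoc = max(candidates, key=score)
```

Both routes apply the same criterion; the side note about efficiency corresponds to the case where none of the unregularized `candidates` happens to score well, so post-hoc selection has nothing good to pick from.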
(Though as a side note, I’d usually expect regularization to be much more efficient in practice, since if your training process has a bias towards the bad reporter, it might be hard to get any good reporters at all.)
If I’m mistaken, I’d be very interested to hear an example of a strategy that fundamentally only works once you have multiple trained models, rather than as a regularizer!
If you try to give feedback during training, there is a risk you’ll just reward it for being deceptive. One advantage to selecting post hoc is that you can avoid incentivizing deception.
I agree with you (both). It’s a framing difference iff you can translate back and forth. My thinking was that the problem might be set up so that it’s “easy” to recognize but difficult to implement (if you can define a strategy which sets it up to be easy to recognize, that is). Another way I thought about it: you can use your ‘meta’ knowledge about human imitators versus direct translators to give you a probability over all reporters, approaching the problem not with certainty of a solution but with recognized uncertainty (I refrain from using ‘known uncertainty’ here, because knowing how much you don’t know something is hard).
I obviously don’t have a good strategy, else my name would be up above, haha. But to give it my attempt: one way to get this knowledge is to study, for example, the way NNs generalize, and how likely it is that you create a direct translator versus an imitator. The proposal I sent in used this knowledge and subjected a lot of reporters to random inputs. In the complex input space (outside the simple training set), translators should behave differently from imitators. Given that direct translators are created less frequently than imitators (which is what the report suggested), the smaller group, or the outliers, are more likely to be translators than imitators. Using this approach you can build up evidence. That’s where I stopped, because time was up.
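The idea above can be sketched as follows; treating each reporter as just a function from inputs to answers, feed all of them the same random off-distribution probes and flag the reporters whose answer vectors sit far from the majority. Everything here is a hypothetical toy (the names, the probe distribution, the outlier rule), on the stated assumption that imitators form the big cluster:

```python
import random

def behavior_signature(reporter, probes):
    """Answer vector of one reporter on a shared set of random probes."""
    return [reporter(x) for x in probes]

def flag_outliers(reporters, probes, k=2.0):
    """Indices of reporters whose behavior lies unusually far from the
    average behavior: candidate direct translators, assuming imitators
    dominate and cluster together."""
    sigs = [behavior_signature(r, probes) for r in reporters]
    centroid = [sum(col) / len(sigs) for col in zip(*sigs)]
    dists = [sum((a - c) ** 2 for a, c in zip(s, centroid)) ** 0.5
             for s in sigs]
    mean = sum(dists) / len(dists)
    spread = (sum((d - mean) ** 2 for d in dists) / len(dists)) ** 0.5
    return [i for i, d in enumerate(dists) if d > mean + k * spread]

# Toy demo: 9 'imitators' that all answer alike, 1 'translator' that
# behaves differently on off-distribution probes.
random.seed(0)
probes = [random.uniform(-5, 5) for _ in range(50)]
imitator = lambda x: 1.0 if x > 0 else 0.0
translator = lambda x: x * 0.5
reporters = [imitator] * 9 + [translator]
print(flag_outliers(reporters, probes))  # -> [9], the translator's index
```

This only yields evidence, not proof: the outlier rule finds whatever is rare, which is exactly where the many-clusters objection below bites.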
And that is why I was interested in whether there were any others using this approach who might have been more successful than me (if there were any, they probably would have been).
EDIT to add:
[...] I think that almost all of the difficulty will be in how you can tell good from bad reporters by looking at them.
I agree it is difficult. But it’s also difficult to have overparameterized models converge on a guaranteed single point on the ridge of all good solutions, so not having to do that would be great. Still, a conclusion of this contest could be that we have no other option, because we definitively have no way of recognizing good from bad.
In the complex input space (outside the simple training set), translators should behave differently from imitators. Given that direct translators are created less frequently than imitators (which is what the report suggested), the smaller group, or the outliers, are more likely to be translators than imitators.
It seems like there are a ton of clusters though—there are exponentially many ways of modifying the human simulator to give different behavior off distribution, each of which is more likely than the direct translator (enough more likely that there are a whole cluster of equivalent variations each of which is still individually more likely than all of the direct translators put together).
Thanks for your reply! You are right, of course. The argument was more about building up evidence.
But after thinking about it some more, I see now that any evidence gathered with a single known feature of priorless models (like the one I mentioned) would be so minuscule (approaching 0[1]) that you’d need to combine an unlikely number of features[2]. You’d end up in (a worse version of) an arms race akin to the ‘science it’ example/counterexample scenario mentioned in the report, and thus it’s a dead end. By extension, all priorless models, with or without a good training regime[3], with or without a good loss function[4], with or without some form of regularization[4], all of them are out.
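The “approaching 0” worry can be made quantitative with simple odds arithmetic: if direct translators have some tiny prior odds against the imitator cluster, and each independent distinguishing feature multiplies the odds by a fixed likelihood ratio, the number of features needed grows with the log of the prior odds. A toy calculation, with both numbers purely illustrative:

```python
import math

prior_odds = 2.0 ** -50   # translator vs. imitator cluster (illustrative)
likelihood_ratio = 3.0    # evidential strength of one independent feature

# Independent features needed before posterior odds exceed 1:1,
# since posterior_odds = prior_odds * likelihood_ratio ** n.
features_needed = math.ceil(-math.log(prior_odds) / math.log(likelihood_ratio))
print(features_needed)  # -> 32 with these numbers
```

And the features must be genuinely independent; as footnote [4] notes, features resting on the same underlying assumption don’t multiply.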
So, I think it’s an interesting dead end. It reduces possible solutions to ELK to solutions that reduce the ridge of good solutions with a prior imposed by/on the architecture or statistical features of the model. In other words, it requires either non-overparameterized models or a way to reduce the ridge of good solutions in overparameterized models. Of the latter I have only seen good descriptions[5] but no solutions[6] (but do let me know if I missed something).
Do you agree?
[1] assuming a black-box, overparameterized model
[2] Like finding a single needle in an infinite haystack: even if your evidence hacks it in half, you’ll be hacking away a long time.
[3] in the case of the ELK contest, due to not being able to sample outside the human understandable
[4] because they would depend on the same (assumption of) evidence/feature
[5] like this one
[6] I know you can see the initialization of parameters as a prior, but I haven’t seen a meaningful prior