The approach in this post is quite similar to what we talked about in the “narrow elicitation” appendix of ELK; I found it pretty interesting to reread it today (and to compare the old strawberry appendix to the new strawberry appendix). The main changes over the last year are:
We have a clearer sense of how heuristic estimators and heuristic arguments could capture different “reasons” for a phenomenon.
Informed by the example of cumulant propagation and Wick products, we have a clearer sense for how you might attribute an effect to a part of a heuristic argument.
We have a potential strategy for mechanistic anomaly detection by minimizing over “subsets” of a heuristic argument (and some ways of defining those).
Largely as a result of filling in those details, we’ve now converged on this as the core of our approach to ELK rather than just an intuitive description of what counts as an ELK counterexample.
Ultimately it’s not surprising that our approach ended up more closely tied to the problem statement itself, whereas approaches based on regularization are more reliant on some contingency. One way to view the change is that we were initially exploring simple options that didn’t require solving philosophical problems or building up complicated new machinery, and now we’ve given up on the easy outs and we’re just trying to formally define what it means for something to happen for the typical reason.
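To make the cumulant-propagation point concrete, here is a toy sketch (the setup and variable names are illustrative, not ARC's actual machinery). It uses the exact identity E[XYZ] = κ₃(X,Y,Z) + μ_X Cov(Y,Z) + μ_Y Cov(X,Z) + μ_Z Cov(X,Y) + μ_X μ_Y μ_Z to decompose an estimate of E[XYZ] into additive terms; each term plays the role of one "part" of a heuristic argument, so the effect can be attributed term by term.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy correlated variables; the decomposition below is an exact algebraic
# identity for the empirical distribution, whatever the joint law is.
n = 10_000
x = rng.normal(1.0, 1.0, n)
y = 0.5 * x + rng.normal(0.0, 1.0, n)
z = 0.3 * y + rng.normal(2.0, 1.0, n)

def mean(a):
    return float(np.mean(a))

def cov(a, b):
    return mean((a - mean(a)) * (b - mean(b)))

mx, my, mz = mean(x), mean(y), mean(z)
# Joint third cumulant kappa_3(X, Y, Z).
kappa3 = mean((x - mx) * (y - my) * (z - mz))

# Cumulant-propagation decomposition of E[XYZ]: each entry is one "part"
# of the heuristic argument, and an effect can be attributed to a part by
# asking how the estimate changes when that term is dropped.
terms = {
    "kappa3": kappa3,
    "mu_x * cov(y, z)": mx * cov(y, z),
    "mu_y * cov(x, z)": my * cov(x, z),
    "mu_z * cov(x, y)": mz * cov(x, y),
    "mu_x * mu_y * mu_z": mx * my * mz,
}
estimate = sum(terms.values())

# The full sum matches the direct empirical estimate of E[XYZ] exactly
# (up to floating-point error), since the identity is algebraic.
print(estimate, mean(x * y * z))
```

In this picture, the "subsets" mentioned above for mechanistic anomaly detection would be subsets of `terms`: one minimizes over subsets that suffice to explain behavior on trusted inputs, and flags a new input as anomalous if its behavior is driven by terms outside that subset. (That minimization is only gestured at here, not implemented.)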