I’m not the asker, but I think I get where they’re coming from. For a long time, linear and logistic regression were the king & queen of modeling. Then non-linear models like random forests and gradient boosting machines made it far easier to fit difficult data. So the original question has me wondering whether there’s a similar gain to be had in going from linearity to non-linearity in interpretability algorithms.
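To make the "fitting difficult data" point concrete, here's a minimal sketch (scikit-learn, toy sine data of my own choosing, not from the original discussion) comparing a linear fit against gradient boosting on a non-linear target:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor

# Toy non-linear data: y = sin(x) plus a little noise
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=500)

# A straight line can only partially track a sine wave...
linear_r2 = LinearRegression().fit(X, y).score(X, y)

# ...while a boosted tree ensemble bends to follow it
boosted_r2 = GradientBoostingRegressor(random_state=0).fit(X, y).score(X, y)

print(f"linear R^2:  {linear_r2:.3f}")
print(f"boosted R^2: {boosted_r2:.3f}")
```

The gap in R² is the kind of jump the comment is alluding to, which makes it natural to ask whether interpretability methods built on linear surrogates leave a similar gap on the table.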
Yep, that’s pretty much it, but with the added bonus of a concrete motivating example. Thanks!