Great post. Do you have a sense of
how much of tree success can be explained / replicated by interpretable models;
whether a similar analysis would work for neural nets?
You suggest that trees work so well because they let you charge ahead even when you've misspecified your model. But in the biomedical and social domains where ML is most often deployed, we are always misspecifying the model. Do you think your new GLM would offer similar idiot-proofing?